mirror of
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
synced 2025-09-04 20:19:47 +08:00
Merge tag 'net-6.17-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from Bluetooth.

  Current release - fix to a fix:

   - usb: asix_devices: fix PHY address mask in MDIO bus initialization

  Current release - regressions:

   - Bluetooth: fixes for the split between BIS_LINK and PA_LINK

   - Revert "net: cadence: macb: sama7g5_emac: Remove USARIO CLKEN
     flag", breaks compatibility with some existing device tree blobs

   - dsa: b53: fix reserved register access in b53_fdb_dump()

  Current release - new code bugs:

   - sched: dualpi2: run probability update timer in BH to avoid
     deadlock

   - eth: libwx: fix the size in RSS hash key population

   - pse-pd: pd692x0: improve power budget error paths and handling

  Previous releases - regressions:

   - tls: fix handling of zero-length records on the rx_list

   - hsr: reject HSR frame if skb can't hold tag

   - bonding: fix negotiation flapping in 802.3ad passive mode

  Previous releases - always broken:

   - gso: forbid IPv6 TSO with extensions on devices with only
     IPV6_CSUM

   - sched: make cake_enqueue return NET_XMIT_CN when past
     buffer_limit, avoid packet drops with low buffer_limit, remove
     unnecessary WARN()

   - sched: fix backlog accounting after modifying config of a qdisc
     in the middle of the hierarchy

   - mptcp: improve handling of skb extension allocation failures

   - eth: mlx5:
      - fixes for the "HW Steering" flow management method
      - fixes for QoS and device buffer management"

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

* tag 'net-6.17-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (81 commits)
  netfilter: nf_reject: don't leak dst refcount for loopback packets
  net/mlx5e: Preserve shared buffer capacity during headroom updates
  net/mlx5e: Query FW for buffer ownership
  net/mlx5: Restore missing scheduling node cleanup on vport enable failure
  net/mlx5: Fix QoS reference leak in vport enable error path
  net/mlx5: Destroy vport QoS element when no configuration remains
  net/mlx5e: Preserve tc-bw during parent changes
  net/mlx5: Remove default QoS group and attach vports directly to root TSAR
  net/mlx5: Base ECVF devlink port attrs from 0
  net: pse-pd: pd692x0: Skip power budget configuration when undefined
  net: pse-pd: pd692x0: Fix power budget leak in manager setup error path
  Octeontx2-af: Skip overlap check for SPI field
  selftests: tls: add tests for zero-length records
  tls: fix handling of zero-length records on the rx_list
  net: airoha: ppe: Do not invalid PPE entries in case of SW hash collision
  selftests: bonding: add test for passive LACP mode
  bonding: send LACPDUs periodically in passive mode after receiving partner's LACPDU
  bonding: update LACP activity flag after setting lacp_active
  Revert "net: cadence: macb: sama7g5_emac: Remove USARIO CLKEN flag"
  ipv6: sr: Fix MAC comparison to be constant-time
  ...
This commit is contained in: commit 6439a0e64c
@@ -12,6 +12,8 @@ add_addr_timeout - INTEGER (seconds)
 	resent to an MPTCP peer that has not acknowledged a previous
 	ADD_ADDR message.
 
+	Do not retransmit if set to 0.
+
 	The default value matches TCP_RTO_MAX. This is a per-namespace
 	sysctl.
@@ -22174,7 +22174,7 @@ F:	arch/s390/mm
 
 S390 NETWORK DRIVERS
 M:	Alexandra Winter <wintera@linux.ibm.com>
-M:	Thorsten Winkler <twinkler@linux.ibm.com>
+R:	Aswin Karuvally <aswin@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 L:	netdev@vger.kernel.org
 S:	Supported
@@ -642,12 +642,7 @@ static int btmtk_usb_hci_wmt_sync(struct hci_dev *hdev,
 	 * WMT command.
 	 */
 	err = wait_on_bit_timeout(&data->flags, BTMTK_TX_WAIT_VND_EVT,
-				  TASK_INTERRUPTIBLE, HCI_INIT_TIMEOUT);
-	if (err == -EINTR) {
-		bt_dev_err(hdev, "Execution of wmt command interrupted");
-		clear_bit(BTMTK_TX_WAIT_VND_EVT, &data->flags);
-		goto err_free_wc;
-	}
+				  TASK_UNINTERRUPTIBLE, HCI_INIT_TIMEOUT);
 
 	if (err) {
 		bt_dev_err(hdev, "Execution of wmt command timed out");
@@ -543,9 +543,9 @@ static int ps_setup(struct hci_dev *hdev)
 	}
 
 	if (psdata->wakeup_source) {
-		ret = devm_request_irq(&serdev->dev, psdata->irq_handler,
-				       ps_host_wakeup_irq_handler,
-				       IRQF_ONESHOT | IRQF_TRIGGER_FALLING,
+		ret = devm_request_threaded_irq(&serdev->dev, psdata->irq_handler,
+						NULL, ps_host_wakeup_irq_handler,
+						IRQF_ONESHOT,
 				       dev_name(&serdev->dev), nxpdev);
 		if (ret)
 			bt_dev_info(hdev, "error setting wakeup IRQ handler, ignoring\n");
@@ -95,13 +95,13 @@ static int ad_marker_send(struct port *port, struct bond_marker *marker);
 static void ad_mux_machine(struct port *port, bool *update_slave_arr);
 static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port);
 static void ad_tx_machine(struct port *port);
-static void ad_periodic_machine(struct port *port, struct bond_params *bond_params);
+static void ad_periodic_machine(struct port *port);
 static void ad_port_selection_logic(struct port *port, bool *update_slave_arr);
 static void ad_agg_selection_logic(struct aggregator *aggregator,
 				   bool *update_slave_arr);
 static void ad_clear_agg(struct aggregator *aggregator);
 static void ad_initialize_agg(struct aggregator *aggregator);
-static void ad_initialize_port(struct port *port, int lacp_fast);
+static void ad_initialize_port(struct port *port, const struct bond_params *bond_params);
 static void ad_enable_collecting(struct port *port);
 static void ad_disable_distributing(struct port *port,
 				    bool *update_slave_arr);
@@ -1307,10 +1307,16 @@ static void ad_rx_machine(struct lacpdu *lacpdu, struct port *port)
 			 * case of EXPIRED even if LINK_DOWN didn't arrive for
 			 * the port.
 			 */
-			port->partner_oper.port_state &= ~LACP_STATE_SYNCHRONIZATION;
-			port->sm_vars &= ~AD_PORT_MATCHED;
+			/* Based on IEEE 8021AX-2014, Figure 6-18 - Receive
+			 * machine state diagram, the statue should be
+			 * Partner_Oper_Port_State.Synchronization = FALSE;
+			 * Partner_Oper_Port_State.LACP_Timeout = Short Timeout;
+			 * start current_while_timer(Short Timeout);
+			 * Actor_Oper_Port_State.Expired = TRUE;
+			 */
+			port->partner_oper.port_state &= ~LACP_STATE_SYNCHRONIZATION;
+			port->partner_oper.port_state |= LACP_STATE_LACP_TIMEOUT;
+			port->partner_oper.port_state |= LACP_STATE_LACP_ACTIVITY;
+			port->sm_rx_timer_counter = __ad_timer_to_ticks(AD_CURRENT_WHILE_TIMER, (u16)(AD_SHORT_TIMEOUT));
 			port->actor_oper_port_state |= LACP_STATE_EXPIRED;
 			port->sm_vars |= AD_PORT_CHURNED;
@@ -1417,11 +1423,10 @@ static void ad_tx_machine(struct port *port)
 /**
  * ad_periodic_machine - handle a port's periodic state machine
  * @port: the port we're looking at
- * @bond_params: bond parameters we will use
  *
  * Turn ntt flag on priodically to perform periodic transmission of lacpdu's.
  */
-static void ad_periodic_machine(struct port *port, struct bond_params *bond_params)
+static void ad_periodic_machine(struct port *port)
 {
 	periodic_states_t last_state;
@@ -1430,8 +1435,7 @@ static void ad_periodic_machine(struct port *port, struct bond_params *bond_para
 
 	/* check if port was reinitialized */
 	if (((port->sm_vars & AD_PORT_BEGIN) || !(port->sm_vars & AD_PORT_LACP_ENABLED) || !port->is_enabled) ||
-	    (!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY)) ||
-	    !bond_params->lacp_active) {
+	    (!(port->actor_oper_port_state & LACP_STATE_LACP_ACTIVITY) && !(port->partner_oper.port_state & LACP_STATE_LACP_ACTIVITY))) {
 		port->sm_periodic_state = AD_NO_PERIODIC;
 	}
 	/* check if state machine should change state */
@@ -1955,16 +1959,16 @@ static void ad_initialize_agg(struct aggregator *aggregator)
 /**
  * ad_initialize_port - initialize a given port's parameters
  * @port: the port we're looking at
- * @lacp_fast: boolean. whether fast periodic should be used
+ * @bond_params: bond parameters we will use
  */
-static void ad_initialize_port(struct port *port, int lacp_fast)
+static void ad_initialize_port(struct port *port, const struct bond_params *bond_params)
 {
 	static const struct port_params tmpl = {
 		.system_priority = 0xffff,
 		.key = 1,
 		.port_number = 1,
 		.port_priority = 0xff,
-		.port_state = 1,
+		.port_state = 0,
 	};
 	static const struct lacpdu lacpdu = {
 		.subtype = 0x01,
@@ -1982,12 +1986,14 @@ static void ad_initialize_port(struct port *port, int lacp_fast)
 	port->actor_port_priority = 0xff;
 	port->actor_port_aggregator_identifier = 0;
 	port->ntt = false;
-	port->actor_admin_port_state = LACP_STATE_AGGREGATION |
-				       LACP_STATE_LACP_ACTIVITY;
-	port->actor_oper_port_state = LACP_STATE_AGGREGATION |
-				      LACP_STATE_LACP_ACTIVITY;
+	port->actor_admin_port_state = LACP_STATE_AGGREGATION;
+	port->actor_oper_port_state = LACP_STATE_AGGREGATION;
+	if (bond_params->lacp_active) {
+		port->actor_admin_port_state |= LACP_STATE_LACP_ACTIVITY;
+		port->actor_oper_port_state |= LACP_STATE_LACP_ACTIVITY;
+	}
 
-	if (lacp_fast)
+	if (bond_params->lacp_fast)
 		port->actor_oper_port_state |= LACP_STATE_LACP_TIMEOUT;
 
 	memcpy(&port->partner_admin, &tmpl, sizeof(tmpl));
@@ -2201,7 +2207,7 @@ void bond_3ad_bind_slave(struct slave *slave)
 	/* port initialization */
 	port = &(SLAVE_AD_INFO(slave)->port);
 
-	ad_initialize_port(port, bond->params.lacp_fast);
+	ad_initialize_port(port, &bond->params);
 
 	port->slave = slave;
 	port->actor_port_number = SLAVE_AD_INFO(slave)->id;
@@ -2513,7 +2519,7 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
 	}
 
 	ad_rx_machine(NULL, port);
-	ad_periodic_machine(port, &bond->params);
+	ad_periodic_machine(port);
 	ad_port_selection_logic(port, &update_slave_arr);
 	ad_mux_machine(port, &update_slave_arr);
 	ad_tx_machine(port);
@@ -2883,6 +2889,31 @@ void bond_3ad_update_lacp_rate(struct bonding *bond)
 	spin_unlock_bh(&bond->mode_lock);
 }
 
+/**
+ * bond_3ad_update_lacp_active - change the lacp active
+ * @bond: bonding struct
+ *
+ * Update actor_oper_port_state when lacp_active is modified.
+ */
+void bond_3ad_update_lacp_active(struct bonding *bond)
+{
+	struct port *port = NULL;
+	struct list_head *iter;
+	struct slave *slave;
+	int lacp_active;
+
+	lacp_active = bond->params.lacp_active;
+	spin_lock_bh(&bond->mode_lock);
+	bond_for_each_slave(bond, slave, iter) {
+		port = &(SLAVE_AD_INFO(slave)->port);
+		if (lacp_active)
+			port->actor_oper_port_state |= LACP_STATE_LACP_ACTIVITY;
+		else
+			port->actor_oper_port_state &= ~LACP_STATE_LACP_ACTIVITY;
+	}
+	spin_unlock_bh(&bond->mode_lock);
+}
+
 size_t bond_3ad_stats_size(void)
 {
 	return nla_total_size_64bit(sizeof(u64)) + /* BOND_3AD_STAT_LACPDU_RX */
@@ -1660,6 +1660,7 @@ static int bond_option_lacp_active_set(struct bonding *bond,
 	netdev_dbg(bond->dev, "Setting LACP active to %s (%llu)\n",
 		   newval->string, newval->value);
 	bond->params.lacp_active = newval->value;
+	bond_3ad_update_lacp_active(bond);
 
 	return 0;
 }
@@ -2078,7 +2078,7 @@ int b53_fdb_dump(struct dsa_switch *ds, int port,
 
 	/* Start search operation */
 	reg = ARL_SRCH_STDN;
-	b53_write8(priv, offset, B53_ARL_SRCH_CTL, reg);
+	b53_write8(priv, B53_ARLIO_PAGE, offset, reg);
 
 	do {
 		ret = b53_arl_search_wait(priv);
@@ -2457,6 +2457,12 @@ static void ksz_update_port_member(struct ksz_device *dev, int port)
 		dev->dev_ops->cfg_port_member(dev, i, val | cpu_port);
 	}
 
+	/* HSR ports are setup once so need to use the assigned membership
+	 * when the port is enabled.
+	 */
+	if (!port_member && p->stp_state == BR_STATE_FORWARDING &&
+	    (dev->hsr_ports & BIT(port)))
+		port_member = dev->hsr_ports;
+
 	dev->dev_ops->cfg_port_member(dev, port, port_member | cpu_port);
 }
@@ -736,10 +736,8 @@ static void airoha_ppe_foe_insert_entry(struct airoha_ppe *ppe,
 			continue;
 		}
 
-		if (commit_done || !airoha_ppe_foe_compare_entry(e, hwe)) {
-			e->hash = 0xffff;
+		if (!airoha_ppe_foe_compare_entry(e, hwe))
 			continue;
-		}
 
 		airoha_ppe_foe_commit_entry(ppe, &e->data, hash);
 		commit_done = true;
@@ -5332,7 +5332,7 @@ static void bnxt_free_ntp_fltrs(struct bnxt *bp, bool all)
 {
 	int i;
 
-	netdev_assert_locked(bp->dev);
+	netdev_assert_locked_or_invisible(bp->dev);
 
 	/* Under netdev instance lock and all our NAPIs have been disabled.
 	 * It's safe to delete the hash table.
@@ -5113,7 +5113,8 @@ static const struct macb_config sama7g5_gem_config = {
 
 static const struct macb_config sama7g5_emac_config = {
 	.caps = MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII |
-		MACB_CAPS_MIIONRGMII | MACB_CAPS_GEM_HAS_PTP,
+		MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_MIIONRGMII |
+		MACB_CAPS_GEM_HAS_PTP,
 	.dma_burst_length = 16,
 	.clk_init = macb_clk_init,
 	.init = macb_init,
@@ -2870,6 +2870,8 @@ static void gve_shutdown(struct pci_dev *pdev)
 	struct gve_priv *priv = netdev_priv(netdev);
 	bool was_up = netif_running(priv->dev);
 
+	netif_device_detach(netdev);
+
 	rtnl_lock();
 	netdev_lock(netdev);
 	if (was_up && gve_close(priv->dev)) {
@@ -7149,6 +7149,13 @@ static int igc_probe(struct pci_dev *pdev,
 	adapter->port_num = hw->bus.func;
 	adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);
 
+	/* PCI config space info */
+	hw->vendor_id = pdev->vendor;
+	hw->device_id = pdev->device;
+	hw->revision_id = pdev->revision;
+	hw->subsystem_vendor_id = pdev->subsystem_vendor;
+	hw->subsystem_device_id = pdev->subsystem_device;
+
 	/* Disable ASPM L1.2 on I226 devices to avoid packet loss */
 	if (igc_is_device_id_i226(hw))
 		pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);
@@ -7175,13 +7182,6 @@ static int igc_probe(struct pci_dev *pdev,
 	netdev->mem_start = pci_resource_start(pdev, 0);
 	netdev->mem_end = pci_resource_end(pdev, 0);
 
-	/* PCI config space info */
-	hw->vendor_id = pdev->vendor;
-	hw->device_id = pdev->device;
-	hw->revision_id = pdev->revision;
-	hw->subsystem_vendor_id = pdev->subsystem_vendor;
-	hw->subsystem_device_id = pdev->subsystem_device;
-
 	/* Copy the default MAC and PHY function pointers */
 	memcpy(&hw->mac.ops, ei->mac_ops, sizeof(hw->mac.ops));
 	memcpy(&hw->phy.ops, ei->phy_ops, sizeof(hw->phy.ops));
@@ -968,10 +968,6 @@ static void ixgbe_update_xoff_rx_lfc(struct ixgbe_adapter *adapter)
 	for (i = 0; i < adapter->num_tx_queues; i++)
 		clear_bit(__IXGBE_HANG_CHECK_ARMED,
 			  &adapter->tx_ring[i]->state);
-
-	for (i = 0; i < adapter->num_xdp_queues; i++)
-		clear_bit(__IXGBE_HANG_CHECK_ARMED,
-			  &adapter->xdp_ring[i]->state);
 }
 
 static void ixgbe_update_xoff_received(struct ixgbe_adapter *adapter)
@@ -1214,7 +1210,7 @@ static void ixgbe_pf_handle_tx_hang(struct ixgbe_ring *tx_ring,
 	struct ixgbe_adapter *adapter = netdev_priv(tx_ring->netdev);
 	struct ixgbe_hw *hw = &adapter->hw;
 
-	e_err(drv, "Detected Tx Unit Hang%s\n"
+	e_err(drv, "Detected Tx Unit Hang\n"
 	      "  Tx Queue             <%d>\n"
 	      "  TDH, TDT             <%x>, <%x>\n"
 	      "  next_to_use          <%x>\n"
@@ -1222,14 +1218,12 @@ static void ixgbe_pf_handle_tx_hang(struct ixgbe_ring *tx_ring,
 	      "tx_buffer_info[next_to_clean]\n"
 	      "  time_stamp           <%lx>\n"
 	      "  jiffies              <%lx>\n",
-	      ring_is_xdp(tx_ring) ? " (XDP)" : "",
 	      tx_ring->queue_index,
 	      IXGBE_READ_REG(hw, IXGBE_TDH(tx_ring->reg_idx)),
 	      IXGBE_READ_REG(hw, IXGBE_TDT(tx_ring->reg_idx)),
 	      tx_ring->next_to_use, next,
 	      tx_ring->tx_buffer_info[next].time_stamp, jiffies);
 
-	if (!ring_is_xdp(tx_ring))
-		netif_stop_subqueue(tx_ring->netdev,
-				    tx_ring->queue_index);
+	netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
 }
@@ -1451,6 +1445,9 @@ static bool ixgbe_clean_tx_irq(struct ixgbe_q_vector *q_vector,
 					  total_bytes);
 	adapter->tx_ipsec += total_ipsec;
 
+	if (ring_is_xdp(tx_ring))
+		return !!budget;
+
 	if (check_for_tx_hang(tx_ring) && ixgbe_check_tx_hang(tx_ring)) {
 		if (adapter->hw.mac.type == ixgbe_mac_e610)
 			ixgbe_handle_mdd_event(adapter, tx_ring);
@@ -1468,9 +1465,6 @@ static bool ixgbe_clean_tx_irq(struct ixgbe_q_vector *q_vector,
 		return true;
 	}
 
-	if (ring_is_xdp(tx_ring))
-		return !!budget;
-
 #define TX_WAKE_THRESHOLD (DESC_NEEDED * 2)
 	txq = netdev_get_tx_queue(tx_ring->netdev, tx_ring->queue_index);
 	if (!__netif_txq_completed_wake(txq, total_packets, total_bytes,
@@ -7974,12 +7968,9 @@ static void ixgbe_check_hang_subtask(struct ixgbe_adapter *adapter)
 		return;
 
 	/* Force detection of hung controller */
-	if (netif_carrier_ok(adapter->netdev)) {
+	if (netif_carrier_ok(adapter->netdev))
 		for (i = 0; i < adapter->num_tx_queues; i++)
 			set_check_for_tx_hang(adapter->tx_ring[i]);
-
-		for (i = 0; i < adapter->num_xdp_queues; i++)
-			set_check_for_tx_hang(adapter->xdp_ring[i]);
-	}
 
 	if (!(adapter->flags & IXGBE_FLAG_MSIX_ENABLED)) {
 		/*
@@ -8199,13 +8190,6 @@ static bool ixgbe_ring_tx_pending(struct ixgbe_adapter *adapter)
 			return true;
 	}
 
-	for (i = 0; i < adapter->num_xdp_queues; i++) {
-		struct ixgbe_ring *ring = adapter->xdp_ring[i];
-
-		if (ring->next_to_use != ring->next_to_clean)
-			return true;
-	}
-
 	return false;
 }
@@ -11005,6 +10989,10 @@ static int ixgbe_xdp_xmit(struct net_device *dev, int n,
 	if (unlikely(test_bit(__IXGBE_DOWN, &adapter->state)))
 		return -ENETDOWN;
 
+	if (!netif_carrier_ok(adapter->netdev) ||
+	    !netif_running(adapter->netdev))
+		return -ENETDOWN;
+
 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
 		return -EINVAL;
@@ -398,7 +398,7 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
 	dma_addr_t dma;
 	u32 cmd_type;
 
-	while (budget-- > 0) {
+	while (likely(budget)) {
 		if (unlikely(!ixgbe_desc_unused(xdp_ring))) {
 			work_done = false;
 			break;
@@ -433,6 +433,8 @@ static bool ixgbe_xmit_zc(struct ixgbe_ring *xdp_ring, unsigned int budget)
 		xdp_ring->next_to_use++;
 		if (xdp_ring->next_to_use == xdp_ring->count)
 			xdp_ring->next_to_use = 0;
+
+		budget--;
 	}
 
 	if (tx_desc) {
@@ -606,8 +606,8 @@ static void npc_set_features(struct rvu *rvu, int blkaddr, u8 intf)
 	if (!npc_check_field(rvu, blkaddr, NPC_LB, intf))
 		*features &= ~BIT_ULL(NPC_OUTER_VID);
 
-	/* Set SPI flag only if AH/ESP and IPSEC_SPI are in the key */
-	if (npc_check_field(rvu, blkaddr, NPC_IPSEC_SPI, intf) &&
+	/* Allow extracting SPI field from AH and ESP headers at same offset */
+	if (npc_is_field_present(rvu, NPC_IPSEC_SPI, intf) &&
 	    (*features & (BIT_ULL(NPC_IPPROTO_ESP) | BIT_ULL(NPC_IPPROTO_AH))))
 		*features |= BIT_ULL(NPC_IPSEC_SPI);
@@ -101,7 +101,9 @@ mtk_flow_get_wdma_info(struct net_device *dev, const u8 *addr, struct mtk_wdma_i
 	if (!IS_ENABLED(CONFIG_NET_MEDIATEK_SOC_WED))
 		return -1;
 
+	rcu_read_lock();
 	err = dev_fill_forward_path(dev, addr, &stack);
+	rcu_read_unlock();
 	if (err)
 		return err;
@@ -26,7 +26,6 @@ struct mlx5e_dcbx {
 	u8 cap;
 
 	/* Buffer configuration */
-	bool manual_buffer;
 	u32 cable_len;
 	u32 xoff;
 	u16 port_buff_cell_sz;
@@ -272,8 +272,8 @@ static int port_update_shared_buffer(struct mlx5_core_dev *mdev,
 	/* Total shared buffer size is split in a ratio of 3:1 between
 	 * lossy and lossless pools respectively.
 	 */
-	lossy_epool_size = (shared_buffer_size / 4) * 3;
 	lossless_ipool_size = shared_buffer_size / 4;
+	lossy_epool_size = shared_buffer_size - lossless_ipool_size;
 
 	mlx5e_port_set_sbpr(mdev, 0, MLX5_EGRESS_DIR, MLX5_LOSSY_POOL, 0,
 			    lossy_epool_size);
@@ -288,14 +288,12 @@ static int port_set_buffer(struct mlx5e_priv *priv,
 	u16 port_buff_cell_sz = priv->dcbx.port_buff_cell_sz;
 	struct mlx5_core_dev *mdev = priv->mdev;
 	int sz = MLX5_ST_SZ_BYTES(pbmc_reg);
-	u32 new_headroom_size = 0;
-	u32 current_headroom_size;
+	u32 current_headroom_cells = 0;
+	u32 new_headroom_cells = 0;
 	void *in;
 	int err;
 	int i;
 
-	current_headroom_size = port_buffer->headroom_size;
-
 	in = kzalloc(sz, GFP_KERNEL);
 	if (!in)
 		return -ENOMEM;
@@ -306,12 +304,14 @@ static int port_set_buffer(struct mlx5e_priv *priv,
 
 	for (i = 0; i < MLX5E_MAX_NETWORK_BUFFER; i++) {
 		void *buffer = MLX5_ADDR_OF(pbmc_reg, in, buffer[i]);
+		current_headroom_cells += MLX5_GET(bufferx_reg, buffer, size);
+
 		u64 size = port_buffer->buffer[i].size;
 		u64 xoff = port_buffer->buffer[i].xoff;
 		u64 xon = port_buffer->buffer[i].xon;
 
-		new_headroom_size += size;
 		do_div(size, port_buff_cell_sz);
+		new_headroom_cells += size;
 		do_div(xoff, port_buff_cell_sz);
 		do_div(xon, port_buff_cell_sz);
 		MLX5_SET(bufferx_reg, buffer, size, size);
@@ -320,10 +320,8 @@ static int port_set_buffer(struct mlx5e_priv *priv,
 		MLX5_SET(bufferx_reg, buffer, xon_threshold, xon);
 	}
 
-	new_headroom_size /= port_buff_cell_sz;
-	current_headroom_size /= port_buff_cell_sz;
-	err = port_update_shared_buffer(priv->mdev, current_headroom_size,
-					new_headroom_size);
+	err = port_update_shared_buffer(priv->mdev, current_headroom_cells,
+					new_headroom_cells);
 	if (err)
 		goto out;
@@ -173,6 +173,8 @@ static void mlx5_ct_fs_hmfs_fill_rule_actions(struct mlx5_ct_fs_hmfs *fs_hmfs,
 	memset(rule_actions, 0, NUM_CT_HMFS_RULES * sizeof(*rule_actions));
 	rule_actions[0].action = mlx5_fc_get_hws_action(fs_hmfs->ctx, attr->counter);
+	rule_actions[0].counter.offset =
+		attr->counter->id - attr->counter->bulk->base_id;
 	/* Modify header is special, it may require extra arguments outside the action itself. */
 	if (mh_action->mh_data) {
 		rule_actions[1].modify_header.offset = mh_action->mh_data->offset;
@@ -362,6 +362,7 @@ static int mlx5e_dcbnl_ieee_getpfc(struct net_device *dev,
 static int mlx5e_dcbnl_ieee_setpfc(struct net_device *dev,
 				   struct ieee_pfc *pfc)
 {
+	u8 buffer_ownership = MLX5_BUF_OWNERSHIP_UNKNOWN;
 	struct mlx5e_priv *priv = netdev_priv(dev);
 	struct mlx5_core_dev *mdev = priv->mdev;
 	u32 old_cable_len = priv->dcbx.cable_len;
@@ -389,7 +390,14 @@ static int mlx5e_dcbnl_ieee_setpfc(struct net_device *dev,
 
 	if (MLX5_BUFFER_SUPPORTED(mdev)) {
 		pfc_new.pfc_en = (changed & MLX5E_PORT_BUFFER_PFC) ? pfc->pfc_en : curr_pfc_en;
-		if (priv->dcbx.manual_buffer)
+		ret = mlx5_query_port_buffer_ownership(mdev,
+						       &buffer_ownership);
+		if (ret)
+			netdev_err(dev,
+				   "%s, Failed to get buffer ownership: %d\n",
+				   __func__, ret);
+
+		if (buffer_ownership == MLX5_BUF_OWNERSHIP_SW_OWNED)
 			ret = mlx5e_port_manual_buffer_config(priv, changed,
 							      dev->mtu, &pfc_new,
 							      NULL, NULL);
@@ -982,7 +990,6 @@ static int mlx5e_dcbnl_setbuffer(struct net_device *dev,
 	if (!changed)
 		return 0;
 
-	priv->dcbx.manual_buffer = true;
 	err = mlx5e_port_manual_buffer_config(priv, changed, dev->mtu, NULL,
 					      buffer_size, prio2buffer);
 	return err;
@@ -1252,7 +1259,6 @@ void mlx5e_dcbnl_initialize(struct mlx5e_priv *priv)
 	priv->dcbx.cap |= DCB_CAP_DCBX_HOST;
 
 	priv->dcbx.port_buff_cell_sz = mlx5e_query_port_buffers_cell_size(priv);
-	priv->dcbx.manual_buffer = false;
 	priv->dcbx.cable_len = MLX5E_DEFAULT_CABLE_LEN;
 
 	mlx5e_ets_init(priv);
@@ -47,10 +47,12 @@ static void mlx5_esw_offloads_pf_vf_devlink_port_attrs_set(struct mlx5_eswitch *
 		devlink_port_attrs_pci_vf_set(dl_port, controller_num, pfnum,
 					      vport_num - 1, external);
 	} else if (mlx5_core_is_ec_vf_vport(esw->dev, vport_num)) {
+		u16 base_vport = mlx5_core_ec_vf_vport_base(dev);
+
 		memcpy(dl_port->attrs.switch_id.id, ppid.id, ppid.id_len);
 		dl_port->attrs.switch_id.id_len = ppid.id_len;
 		devlink_port_attrs_pci_vf_set(dl_port, 0, pfnum,
-					      vport_num - 1, false);
+					      vport_num - base_vport, false);
 	}
 }
@@ -102,6 +102,8 @@ struct mlx5_esw_sched_node {
 	u8 level;
 	/* Valid only when this node represents a traffic class. */
 	u8 tc;
+	/* Valid only for a TC arbiter node or vport TC arbiter. */
+	u32 tc_bw[DEVLINK_RATE_TCS_MAX];
 };
 
 static void esw_qos_node_attach_to_parent(struct mlx5_esw_sched_node *node)
@@ -462,6 +464,7 @@ static int
 esw_qos_vport_create_sched_element(struct mlx5_esw_sched_node *vport_node,
 				   struct netlink_ext_ack *extack)
 {
+	struct mlx5_esw_sched_node *parent = vport_node->parent;
 	u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
 	struct mlx5_core_dev *dev = vport_node->esw->dev;
 	void *attr;
@@ -477,7 +480,7 @@ esw_qos_vport_create_sched_element(struct mlx5_esw_sched_node *vport_node,
 	attr = MLX5_ADDR_OF(scheduling_context, sched_ctx, element_attributes);
 	MLX5_SET(vport_element, attr, vport_number, vport_node->vport->vport);
 	MLX5_SET(scheduling_context, sched_ctx, parent_element_id,
-		 vport_node->parent->ix);
+		 parent ? parent->ix : vport_node->esw->qos.root_tsar_ix);
 	MLX5_SET(scheduling_context, sched_ctx, max_average_bw,
 		 vport_node->max_rate);
@@ -608,10 +611,7 @@ static void
 esw_qos_tc_arbiter_get_bw_shares(struct mlx5_esw_sched_node *tc_arbiter_node,
 				 u32 *tc_bw)
 {
-	struct mlx5_esw_sched_node *vports_tc_node;
-
-	list_for_each_entry(vports_tc_node, &tc_arbiter_node->children, entry)
-		tc_bw[vports_tc_node->tc] = vports_tc_node->bw_share;
+	memcpy(tc_bw, tc_arbiter_node->tc_bw, sizeof(tc_arbiter_node->tc_bw));
}
 
 static void
@@ -628,6 +628,7 @@ esw_qos_set_tc_arbiter_bw_shares(struct mlx5_esw_sched_node *tc_arbiter_node,
 		u8 tc = vports_tc_node->tc;
 		u32 bw_share;
 
+		tc_arbiter_node->tc_bw[tc] = tc_bw[tc];
 		bw_share = tc_bw[tc] * fw_max_bw_share;
 		bw_share = esw_qos_calc_bw_share(bw_share, divider,
 						 fw_max_bw_share);
@@ -786,48 +787,15 @@ static int esw_qos_create(struct mlx5_eswitch *esw, struct netlink_ext_ack *exta
 		return err;
 	}
 
-	if (MLX5_CAP_QOS(dev, log_esw_max_sched_depth)) {
-		esw->qos.node0 = __esw_qos_create_vports_sched_node(esw, NULL, extack);
-	} else {
-		/* The eswitch doesn't support scheduling nodes.
-		 * Create a software-only node0 using the root TSAR to attach vport QoS to.
-		 */
-		if (!__esw_qos_alloc_node(esw,
-					  esw->qos.root_tsar_ix,
-					  SCHED_NODE_TYPE_VPORTS_TSAR,
-					  NULL))
-			esw->qos.node0 = ERR_PTR(-ENOMEM);
-		else
-			list_add_tail(&esw->qos.node0->entry,
-				      &esw->qos.domain->nodes);
-	}
-	if (IS_ERR(esw->qos.node0)) {
-		err = PTR_ERR(esw->qos.node0);
-		esw_warn(dev, "E-Switch create rate node 0 failed (%d)\n", err);
-		goto err_node0;
-	}
 	refcount_set(&esw->qos.refcnt, 1);
 
 	return 0;
-
-err_node0:
-	if (mlx5_destroy_scheduling_element_cmd(esw->dev, SCHEDULING_HIERARCHY_E_SWITCH,
-						esw->qos.root_tsar_ix))
-		esw_warn(esw->dev, "E-Switch destroy root TSAR failed.\n");
-
-	return err;
 }
 
 static void esw_qos_destroy(struct mlx5_eswitch *esw)
 {
 	int err;
 
-	if (esw->qos.node0->ix != esw->qos.root_tsar_ix)
-		__esw_qos_destroy_node(esw->qos.node0, NULL);
-	else
-		__esw_qos_free_node(esw->qos.node0);
-	esw->qos.node0 = NULL;
-
 	err = mlx5_destroy_scheduling_element_cmd(esw->dev,
 						  SCHEDULING_HIERARCHY_E_SWITCH,
 						  esw->qos.root_tsar_ix);
@@ -990,13 +958,16 @@ esw_qos_vport_tc_enable(struct mlx5_vport *vport, enum sched_node_type type,
 			struct netlink_ext_ack *extack)
 {
 	struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
-	int err, new_level, max_level;
+	struct mlx5_esw_sched_node *parent = vport_node->parent;
+	int err;
 
 	if (type == SCHED_NODE_TYPE_TC_ARBITER_TSAR) {
+		int new_level, max_level;
+
 		/* Increase the parent's level by 2 to account for both the
 		 * TC arbiter and the vports TC scheduling element.
 		 */
-		new_level = vport_node->parent->level + 2;
+		new_level = (parent ? parent->level : 2) + 2;
 		max_level = 1 << MLX5_CAP_QOS(vport_node->esw->dev,
 					      log_esw_max_sched_depth);
 		if (new_level > max_level) {
@ -1033,9 +1004,7 @@ esw_qos_vport_tc_enable(struct mlx5_vport *vport, enum sched_node_type type,
|
||||
err_sched_nodes:
|
||||
if (type == SCHED_NODE_TYPE_RATE_LIMITER) {
|
||||
esw_qos_node_destroy_sched_element(vport_node, NULL);
|
||||
list_add_tail(&vport_node->entry,
|
||||
&vport_node->parent->children);
|
||||
vport_node->level = vport_node->parent->level + 1;
|
||||
esw_qos_node_attach_to_parent(vport_node);
|
||||
} else {
|
||||
esw_qos_tc_arbiter_scheduling_teardown(vport_node, NULL);
|
||||
}
|
||||
@ -1083,7 +1052,6 @@ err_out:
|
||||
static void esw_qos_vport_disable(struct mlx5_vport *vport, struct netlink_ext_ack *extack)
|
||||
{
|
||||
struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
|
||||
struct mlx5_esw_sched_node *parent = vport_node->parent;
|
||||
enum sched_node_type curr_type = vport_node->type;
|
||||
|
||||
if (curr_type == SCHED_NODE_TYPE_VPORT)
|
||||
@ -1092,8 +1060,9 @@ static void esw_qos_vport_disable(struct mlx5_vport *vport, struct netlink_ext_a
|
||||
esw_qos_vport_tc_disable(vport, extack);
|
||||
|
||||
vport_node->bw_share = 0;
|
||||
memset(vport_node->tc_bw, 0, sizeof(vport_node->tc_bw));
|
||||
list_del_init(&vport_node->entry);
|
||||
esw_qos_normalize_min_rate(parent->esw, parent, extack);
|
||||
esw_qos_normalize_min_rate(vport_node->esw, vport_node->parent, extack);
|
||||
|
||||
trace_mlx5_esw_vport_qos_destroy(vport_node->esw->dev, vport);
|
||||
}
|
||||
@ -1103,25 +1072,23 @@ static int esw_qos_vport_enable(struct mlx5_vport *vport,
|
||||
struct mlx5_esw_sched_node *parent,
|
||||
struct netlink_ext_ack *extack)
|
||||
{
|
||||
struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
|
||||
int err;
|
||||
|
||||
esw_assert_qos_lock_held(vport->dev->priv.eswitch);
|
||||
|
||||
esw_qos_node_set_parent(vport->qos.sched_node, parent);
|
||||
if (type == SCHED_NODE_TYPE_VPORT) {
|
||||
err = esw_qos_vport_create_sched_element(vport->qos.sched_node,
|
||||
extack);
|
||||
} else {
|
||||
esw_qos_node_set_parent(vport_node, parent);
|
||||
if (type == SCHED_NODE_TYPE_VPORT)
|
||||
err = esw_qos_vport_create_sched_element(vport_node, extack);
|
||||
else
|
||||
err = esw_qos_vport_tc_enable(vport, type, extack);
|
||||
}
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
vport->qos.sched_node->type = type;
|
||||
esw_qos_normalize_min_rate(parent->esw, parent, extack);
|
||||
trace_mlx5_esw_vport_qos_create(vport->dev, vport,
|
||||
vport->qos.sched_node->max_rate,
|
||||
vport->qos.sched_node->bw_share);
|
||||
vport_node->type = type;
|
||||
esw_qos_normalize_min_rate(vport_node->esw, parent, extack);
|
||||
trace_mlx5_esw_vport_qos_create(vport->dev, vport, vport_node->max_rate,
|
||||
vport_node->bw_share);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -1132,6 +1099,7 @@ static int mlx5_esw_qos_vport_enable(struct mlx5_vport *vport, enum sched_node_t
|
||||
{
|
||||
struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
|
||||
struct mlx5_esw_sched_node *sched_node;
|
||||
struct mlx5_eswitch *parent_esw;
|
||||
int err;
|
||||
|
||||
esw_assert_qos_lock_held(esw);
|
||||
@ -1139,10 +1107,14 @@ static int mlx5_esw_qos_vport_enable(struct mlx5_vport *vport, enum sched_node_t
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
parent = parent ?: esw->qos.node0;
|
||||
sched_node = __esw_qos_alloc_node(parent->esw, 0, type, parent);
|
||||
if (!sched_node)
|
||||
parent_esw = parent ? parent->esw : esw;
|
||||
sched_node = __esw_qos_alloc_node(parent_esw, 0, type, parent);
|
||||
if (!sched_node) {
|
||||
esw_qos_put(esw);
|
||||
return -ENOMEM;
|
||||
}
|
||||
if (!parent)
|
||||
list_add_tail(&sched_node->entry, &esw->qos.domain->nodes);
|
||||
|
||||
sched_node->max_rate = max_rate;
|
||||
sched_node->min_rate = min_rate;
|
||||
@ -1150,6 +1122,7 @@ static int mlx5_esw_qos_vport_enable(struct mlx5_vport *vport, enum sched_node_t
|
||||
vport->qos.sched_node = sched_node;
|
||||
err = esw_qos_vport_enable(vport, type, parent, extack);
|
||||
if (err) {
|
||||
__esw_qos_free_node(sched_node);
|
||||
esw_qos_put(esw);
|
||||
vport->qos.sched_node = NULL;
|
||||
}
|
||||
@ -1157,6 +1130,19 @@ static int mlx5_esw_qos_vport_enable(struct mlx5_vport *vport, enum sched_node_t
|
||||
return err;
|
||||
}
|
||||
|
||||
static void mlx5_esw_qos_vport_disable_locked(struct mlx5_vport *vport)
|
||||
{
|
||||
struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
|
||||
|
||||
esw_assert_qos_lock_held(esw);
|
||||
if (!vport->qos.sched_node)
|
||||
return;
|
||||
|
||||
esw_qos_vport_disable(vport, NULL);
|
||||
mlx5_esw_qos_vport_qos_free(vport);
|
||||
esw_qos_put(esw);
|
||||
}
|
||||
|
||||
void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
|
||||
{
|
||||
struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
|
||||
@ -1168,11 +1154,9 @@ void mlx5_esw_qos_vport_disable(struct mlx5_vport *vport)
|
||||
goto unlock;
|
||||
|
||||
parent = vport->qos.sched_node->parent;
|
||||
WARN(parent != esw->qos.node0, "Disabling QoS on port before detaching it from node");
|
||||
WARN(parent, "Disabling QoS on port before detaching it from node");
|
||||
|
||||
esw_qos_vport_disable(vport, NULL);
|
||||
mlx5_esw_qos_vport_qos_free(vport);
|
||||
esw_qos_put(esw);
|
||||
mlx5_esw_qos_vport_disable_locked(vport);
|
||||
unlock:
|
||||
esw_qos_unlock(esw);
|
||||
}
|
||||
@ -1262,13 +1246,13 @@ static int esw_qos_vport_update(struct mlx5_vport *vport,
|
||||
struct mlx5_esw_sched_node *parent,
|
||||
struct netlink_ext_ack *extack)
|
||||
{
|
||||
struct mlx5_esw_sched_node *curr_parent = vport->qos.sched_node->parent;
|
||||
enum sched_node_type curr_type = vport->qos.sched_node->type;
|
||||
struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
|
||||
struct mlx5_esw_sched_node *curr_parent = vport_node->parent;
|
||||
enum sched_node_type curr_type = vport_node->type;
|
||||
u32 curr_tc_bw[DEVLINK_RATE_TCS_MAX] = {0};
|
||||
int err;
|
||||
|
||||
esw_assert_qos_lock_held(vport->dev->priv.eswitch);
|
||||
parent = parent ?: curr_parent;
|
||||
if (curr_type == type && curr_parent == parent)
|
||||
return 0;
|
||||
|
||||
@ -1276,10 +1260,8 @@ static int esw_qos_vport_update(struct mlx5_vport *vport,
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
if (curr_type == SCHED_NODE_TYPE_TC_ARBITER_TSAR && curr_type == type) {
|
||||
esw_qos_tc_arbiter_get_bw_shares(vport->qos.sched_node,
|
||||
curr_tc_bw);
|
||||
}
|
||||
if (curr_type == SCHED_NODE_TYPE_TC_ARBITER_TSAR && curr_type == type)
|
||||
esw_qos_tc_arbiter_get_bw_shares(vport_node, curr_tc_bw);
|
||||
|
||||
esw_qos_vport_disable(vport, extack);
|
||||
|
||||
@ -1290,8 +1272,8 @@ static int esw_qos_vport_update(struct mlx5_vport *vport,
|
||||
}
|
||||
|
||||
if (curr_type == SCHED_NODE_TYPE_TC_ARBITER_TSAR && curr_type == type) {
|
||||
esw_qos_set_tc_arbiter_bw_shares(vport->qos.sched_node,
|
||||
curr_tc_bw, extack);
|
||||
esw_qos_set_tc_arbiter_bw_shares(vport_node, curr_tc_bw,
|
||||
extack);
|
||||
}
|
||||
|
||||
return err;
|
||||
@ -1306,16 +1288,16 @@ static int esw_qos_vport_update_parent(struct mlx5_vport *vport, struct mlx5_esw
|
||||
|
||||
esw_assert_qos_lock_held(esw);
|
||||
curr_parent = vport->qos.sched_node->parent;
|
||||
parent = parent ?: esw->qos.node0;
|
||||
if (curr_parent == parent)
|
||||
return 0;
|
||||
|
||||
/* Set vport QoS type based on parent node type if different from
|
||||
* default QoS; otherwise, use the vport's current QoS type.
|
||||
*/
|
||||
if (parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR)
|
||||
if (parent && parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR)
|
||||
type = SCHED_NODE_TYPE_RATE_LIMITER;
|
||||
else if (curr_parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR)
|
||||
else if (curr_parent &&
|
||||
curr_parent->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR)
|
||||
type = SCHED_NODE_TYPE_VPORT;
|
||||
else
|
||||
type = vport->qos.sched_node->type;
|
||||
@ -1654,9 +1636,10 @@ static bool esw_qos_validate_unsupported_tc_bw(struct mlx5_eswitch *esw,
|
||||
static bool esw_qos_vport_validate_unsupported_tc_bw(struct mlx5_vport *vport,
|
||||
u32 *tc_bw)
|
||||
{
|
||||
struct mlx5_eswitch *esw = vport->qos.sched_node ?
|
||||
vport->qos.sched_node->parent->esw :
|
||||
vport->dev->priv.eswitch;
|
||||
struct mlx5_esw_sched_node *node = vport->qos.sched_node;
|
||||
struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
|
||||
|
||||
esw = (node && node->parent) ? node->parent->esw : esw;
|
||||
|
||||
return esw_qos_validate_unsupported_tc_bw(esw, tc_bw);
|
||||
}
|
||||
@ -1673,6 +1656,21 @@ static bool esw_qos_tc_bw_disabled(u32 *tc_bw)
|
||||
return true;
|
||||
}
|
||||
|
||||
static void esw_vport_qos_prune_empty(struct mlx5_vport *vport)
|
||||
{
|
||||
struct mlx5_esw_sched_node *vport_node = vport->qos.sched_node;
|
||||
|
||||
esw_assert_qos_lock_held(vport->dev->priv.eswitch);
|
||||
if (!vport_node)
|
||||
return;
|
||||
|
||||
if (vport_node->parent || vport_node->max_rate ||
|
||||
vport_node->min_rate || !esw_qos_tc_bw_disabled(vport_node->tc_bw))
|
||||
return;
|
||||
|
||||
mlx5_esw_qos_vport_disable_locked(vport);
|
||||
}
|
||||
|
||||
int mlx5_esw_qos_init(struct mlx5_eswitch *esw)
|
||||
{
|
||||
if (esw->qos.domain)
|
||||
@ -1706,6 +1704,10 @@ int mlx5_esw_devlink_rate_leaf_tx_share_set(struct devlink_rate *rate_leaf, void
|
||||
|
||||
esw_qos_lock(esw);
|
||||
err = mlx5_esw_qos_set_vport_min_rate(vport, tx_share, extack);
|
||||
if (err)
|
||||
goto out;
|
||||
esw_vport_qos_prune_empty(vport);
|
||||
out:
|
||||
esw_qos_unlock(esw);
|
||||
return err;
|
||||
}
|
||||
@ -1727,6 +1729,10 @@ int mlx5_esw_devlink_rate_leaf_tx_max_set(struct devlink_rate *rate_leaf, void *
|
||||
|
||||
esw_qos_lock(esw);
|
||||
err = mlx5_esw_qos_set_vport_max_rate(vport, tx_max, extack);
|
||||
if (err)
|
||||
goto out;
|
||||
esw_vport_qos_prune_empty(vport);
|
||||
out:
|
||||
esw_qos_unlock(esw);
|
||||
return err;
|
||||
}
|
||||
@ -1763,7 +1769,8 @@ int mlx5_esw_devlink_rate_leaf_tc_bw_set(struct devlink_rate *rate_leaf,
|
||||
if (disable) {
|
||||
if (vport_node->type == SCHED_NODE_TYPE_TC_ARBITER_TSAR)
|
||||
err = esw_qos_vport_update(vport, SCHED_NODE_TYPE_VPORT,
|
||||
NULL, extack);
|
||||
vport_node->parent, extack);
|
||||
esw_vport_qos_prune_empty(vport);
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
@ -1775,7 +1782,7 @@ int mlx5_esw_devlink_rate_leaf_tc_bw_set(struct devlink_rate *rate_leaf,
|
||||
} else {
|
||||
err = esw_qos_vport_update(vport,
|
||||
SCHED_NODE_TYPE_TC_ARBITER_TSAR,
|
||||
NULL, extack);
|
||||
vport_node->parent, extack);
|
||||
}
|
||||
if (!err)
|
||||
esw_qos_set_tc_arbiter_bw_shares(vport_node, tc_bw, extack);
|
||||
@ -1924,14 +1931,20 @@ int mlx5_esw_devlink_rate_leaf_parent_set(struct devlink_rate *devlink_rate,
|
||||
void *priv, void *parent_priv,
|
||||
struct netlink_ext_ack *extack)
|
||||
{
|
||||
struct mlx5_esw_sched_node *node;
|
||||
struct mlx5_esw_sched_node *node = parent ? parent_priv : NULL;
|
||||
struct mlx5_vport *vport = priv;
|
||||
int err;
|
||||
|
||||
if (!parent)
|
||||
return mlx5_esw_qos_vport_update_parent(vport, NULL, extack);
|
||||
err = mlx5_esw_qos_vport_update_parent(vport, node, extack);
|
||||
if (!err) {
|
||||
struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
|
||||
|
||||
node = parent_priv;
|
||||
return mlx5_esw_qos_vport_update_parent(vport, node, extack);
|
||||
esw_qos_lock(esw);
|
||||
esw_vport_qos_prune_empty(vport);
|
||||
esw_qos_unlock(esw);
|
||||
}
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static bool esw_qos_is_node_empty(struct mlx5_esw_sched_node *node)
|
||||
|
@@ -373,11 +373,6 @@ struct mlx5_eswitch {
		refcount_t refcnt;
		u32 root_tsar_ix;
		struct mlx5_qos_domain *domain;
		/* Contains all vports with QoS enabled but no explicit node.
		 * Cannot be NULL if QoS is enabled, but may be a fake node
		 * referencing the root TSAR if the esw doesn't support nodes.
		 */
		struct mlx5_esw_sched_node *node0;
	} qos;

	struct mlx5_esw_bridge_offloads *br_offloads;
@@ -367,6 +367,8 @@ int mlx5_query_port_dcbx_param(struct mlx5_core_dev *mdev, u32 *out);
int mlx5_set_port_dcbx_param(struct mlx5_core_dev *mdev, u32 *in);
int mlx5_set_trust_state(struct mlx5_core_dev *mdev, u8 trust_state);
int mlx5_query_trust_state(struct mlx5_core_dev *mdev, u8 *trust_state);
int mlx5_query_port_buffer_ownership(struct mlx5_core_dev *mdev,
				     u8 *buffer_ownership);
int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, u8 dscp, u8 prio);
int mlx5_query_dscp2prio(struct mlx5_core_dev *mdev, u8 *dscp2prio);

@@ -968,6 +968,26 @@ int mlx5_query_trust_state(struct mlx5_core_dev *mdev, u8 *trust_state)
	return err;
}

int mlx5_query_port_buffer_ownership(struct mlx5_core_dev *mdev,
				     u8 *buffer_ownership)
{
	u32 out[MLX5_ST_SZ_DW(pfcc_reg)] = {};
	int err;

	if (!MLX5_CAP_PCAM_FEATURE(mdev, buffer_ownership)) {
		*buffer_ownership = MLX5_BUF_OWNERSHIP_UNKNOWN;
		return 0;
	}

	err = mlx5_query_pfcc_reg(mdev, out, sizeof(out));
	if (err)
		return err;

	*buffer_ownership = MLX5_GET(pfcc_reg, out, buf_ownership);

	return 0;
}

int mlx5_set_dscp2prio(struct mlx5_core_dev *mdev, u8 dscp, u8 prio)
{
	int sz = MLX5_ST_SZ_BYTES(qpdpm_reg);
@@ -74,9 +74,9 @@ static void hws_bwc_matcher_init_attr(struct mlx5hws_bwc_matcher *bwc_matcher,
static int
hws_bwc_matcher_move_all_simple(struct mlx5hws_bwc_matcher *bwc_matcher)
{
	bool move_error = false, poll_error = false, drain_error = false;
	struct mlx5hws_context *ctx = bwc_matcher->matcher->tbl->ctx;
	struct mlx5hws_matcher *matcher = bwc_matcher->matcher;
	int drain_error = 0, move_error = 0, poll_error = 0;
	u16 bwc_queues = mlx5hws_bwc_queues(ctx);
	struct mlx5hws_rule_attr rule_attr;
	struct mlx5hws_bwc_rule *bwc_rule;
@@ -84,6 +84,7 @@ hws_bwc_matcher_move_all_simple(struct mlx5hws_bwc_matcher *bwc_matcher)
	struct list_head *rules_list;
	u32 pending_rules;
	int i, ret = 0;
	bool drain;

	mlx5hws_bwc_rule_fill_attr(bwc_matcher, 0, 0, &rule_attr);

@@ -99,23 +100,37 @@ hws_bwc_matcher_move_all_simple(struct mlx5hws_bwc_matcher *bwc_matcher)
			ret = mlx5hws_matcher_resize_rule_move(matcher,
							       bwc_rule->rule,
							       &rule_attr);
			if (unlikely(ret && !move_error)) {
			if (unlikely(ret)) {
				if (!move_error) {
					mlx5hws_err(ctx,
						    "Moving BWC rule: move failed (%d), attempting to move rest of the rules\n",
						    ret);
					move_error = true;
					move_error = ret;
				}
				/* Rule wasn't queued, no need to poll */
				continue;
			}

			pending_rules++;
			drain = pending_rules >=
				hws_bwc_get_burst_th(ctx, rule_attr.queue_id);
			ret = mlx5hws_bwc_queue_poll(ctx,
						     rule_attr.queue_id,
						     &pending_rules,
						     false);
			if (unlikely(ret && !poll_error)) {
						     drain);
			if (unlikely(ret)) {
				if (ret == -ETIMEDOUT) {
					mlx5hws_err(ctx,
						    "Moving BWC rule: poll failed (%d), attempting to move rest of the rules\n",
						    "Moving BWC rule: timeout polling for completions (%d), aborting rehash\n",
						    ret);
					poll_error = true;
					return ret;
				}
				if (!poll_error) {
					mlx5hws_err(ctx,
						    "Moving BWC rule: polling for completions failed (%d), attempting to move rest of the rules\n",
						    ret);
					poll_error = ret;
				}
			}
		}

@@ -126,17 +141,30 @@ hws_bwc_matcher_move_all_simple(struct mlx5hws_bwc_matcher *bwc_matcher)
					     rule_attr.queue_id,
					     &pending_rules,
					     true);
			if (unlikely(ret && !drain_error)) {
			if (unlikely(ret)) {
				if (ret == -ETIMEDOUT) {
					mlx5hws_err(ctx,
						    "Moving BWC rule: drain failed (%d), attempting to move rest of the rules\n",
						    "Moving bwc rule: timeout draining completions (%d), aborting rehash\n",
						    ret);
					drain_error = true;
					return ret;
				}
				if (!drain_error) {
					mlx5hws_err(ctx,
						    "Moving bwc rule: drain failed (%d), attempting to move rest of the rules\n",
						    ret);
					drain_error = ret;
				}
			}
		}
	}

	if (move_error || poll_error || drain_error)
		ret = -EINVAL;
	/* Return the first error that happened */
	if (unlikely(move_error))
		return move_error;
	if (unlikely(poll_error))
		return poll_error;
	if (unlikely(drain_error))
		return drain_error;

	return ret;
}
@@ -1035,6 +1063,21 @@ int mlx5hws_bwc_rule_create_simple(struct mlx5hws_bwc_rule *bwc_rule,
		return 0; /* rule inserted successfully */
	}

	/* Rule insertion could fail due to queue being full, timeout, or
	 * matcher in resize. In such cases, no point in trying to rehash.
	 */
	if (ret == -EBUSY || ret == -ETIMEDOUT || ret == -EAGAIN) {
		mutex_unlock(queue_lock);
		mlx5hws_err(ctx,
			    "BWC rule insertion failed - %s (%d)\n",
			    ret == -EBUSY ? "queue is full" :
			    ret == -ETIMEDOUT ? "timeout" :
			    ret == -EAGAIN ? "matcher in resize" : "N/A",
			    ret);
		hws_bwc_rule_cnt_dec(bwc_rule);
		return ret;
	}

	/* At this point the rule wasn't added.
	 * It could be because there was collision, or some other problem.
	 * Try rehash by size and insert rule again - last chance.
@@ -1328,11 +1328,11 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher)
{
	struct mlx5hws_context *ctx = bwc_matcher->matcher->tbl->ctx;
	struct mlx5hws_matcher *matcher = bwc_matcher->matcher;
	bool move_error = false, poll_error = false;
	u16 bwc_queues = mlx5hws_bwc_queues(ctx);
	struct mlx5hws_bwc_rule *tmp_bwc_rule;
	struct mlx5hws_rule_attr rule_attr;
	struct mlx5hws_table *isolated_tbl;
	int move_error = 0, poll_error = 0;
	struct mlx5hws_rule *tmp_rule;
	struct list_head *rules_list;
	u32 expected_completions = 1;
@@ -1391,11 +1391,15 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher)
			ret = mlx5hws_matcher_resize_rule_move(matcher,
							       tmp_rule,
							       &rule_attr);
			if (unlikely(ret && !move_error)) {
			if (unlikely(ret)) {
				if (!move_error) {
					mlx5hws_err(ctx,
						    "Moving complex BWC rule failed (%d), attempting to move rest of the rules\n",
						    "Moving complex BWC rule: move failed (%d), attempting to move rest of the rules\n",
						    ret);
					move_error = true;
					move_error = ret;
				}
				/* Rule wasn't queued, no need to poll */
				continue;
			}

			expected_completions = 1;
@@ -1403,11 +1407,19 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher)
						     rule_attr.queue_id,
						     &expected_completions,
						     true);
			if (unlikely(ret && !poll_error)) {
			if (unlikely(ret)) {
				if (ret == -ETIMEDOUT) {
					mlx5hws_err(ctx,
						    "Moving complex BWC rule: poll failed (%d), attempting to move rest of the rules\n",
						    "Moving complex BWC rule: timeout polling for completions (%d), aborting rehash\n",
						    ret);
					poll_error = true;
					return ret;
				}
				if (!poll_error) {
					mlx5hws_err(ctx,
						    "Moving complex BWC rule: polling for completions failed (%d), attempting to move rest of the rules\n",
						    ret);
					poll_error = ret;
				}
			}

			/* Done moving the rule to the new matcher,
@@ -1422,8 +1434,11 @@ mlx5hws_bwc_matcher_move_all_complex(struct mlx5hws_bwc_matcher *bwc_matcher)
		}
	}

	if (move_error || poll_error)
		ret = -EINVAL;
	/* Return the first error that happened */
	if (unlikely(move_error))
		return move_error;
	if (unlikely(poll_error))
		return poll_error;

	return ret;
}
@@ -55,6 +55,7 @@ int mlx5hws_cmd_flow_table_create(struct mlx5_core_dev *mdev,

	MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE);
	MLX5_SET(create_flow_table_in, in, table_type, ft_attr->type);
	MLX5_SET(create_flow_table_in, in, uid, ft_attr->uid);

	ft_ctx = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context);
	MLX5_SET(flow_table_context, ft_ctx, level, ft_attr->level);
@@ -36,6 +36,7 @@ struct mlx5hws_cmd_set_fte_attr {
struct mlx5hws_cmd_ft_create_attr {
	u8 type;
	u8 level;
	u16 uid;
	bool rtc_valid;
	bool decap_en;
	bool reformat_en;
@@ -267,6 +267,7 @@ static int mlx5_cmd_hws_create_flow_table(struct mlx5_flow_root_namespace *ns,

	tbl_attr.type = MLX5HWS_TABLE_TYPE_FDB;
	tbl_attr.level = ft_attr->level;
	tbl_attr.uid = ft_attr->uid;
	tbl = mlx5hws_table_create(ctx, &tbl_attr);
	if (!tbl) {
		mlx5_core_err(ns->dev, "Failed creating hws flow_table\n");
@@ -85,6 +85,7 @@ static int hws_matcher_create_end_ft_isolated(struct mlx5hws_matcher *matcher)

	ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev,
					      tbl,
					      0,
					      &matcher->end_ft_id);
	if (ret) {
		mlx5hws_err(tbl->ctx, "Isolated matcher: failed to create end flow table\n");
@@ -112,7 +113,9 @@ static int hws_matcher_create_end_ft(struct mlx5hws_matcher *matcher)
	if (mlx5hws_matcher_is_isolated(matcher))
		ret = hws_matcher_create_end_ft_isolated(matcher);
	else
		ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, tbl,
		ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev,
						      tbl,
						      0,
						      &matcher->end_ft_id);

	if (ret) {
@@ -75,6 +75,7 @@ struct mlx5hws_context_attr {
struct mlx5hws_table_attr {
	enum mlx5hws_table_type type;
	u32 level;
	u16 uid;
};

enum mlx5hws_matcher_flow_src {
@@ -964,7 +964,6 @@ static int hws_send_ring_open_cq(struct mlx5_core_dev *mdev,
		return -ENOMEM;

	MLX5_SET(cqc, cqc_data, uar_page, mdev->priv.uar->index);
	MLX5_SET(cqc, cqc_data, cqe_sz, queue->num_entries);
	MLX5_SET(cqc, cqc_data, log_cq_size, ilog2(queue->num_entries));

	err = hws_send_ring_alloc_cq(mdev, numa_node, queue, cqc_data, cq);
@@ -9,6 +9,7 @@ u32 mlx5hws_table_get_id(struct mlx5hws_table *tbl)
}

static void hws_table_init_next_ft_attr(struct mlx5hws_table *tbl,
					u16 uid,
					struct mlx5hws_cmd_ft_create_attr *ft_attr)
{
	ft_attr->type = tbl->fw_ft_type;
@@ -16,7 +17,9 @@ static void hws_table_init_next_ft_attr(struct mlx5hws_table *tbl,
		ft_attr->level = tbl->ctx->caps->fdb_ft.max_level - 1;
	else
		ft_attr->level = tbl->ctx->caps->nic_ft.max_level - 1;

	ft_attr->rtc_valid = true;
	ft_attr->uid = uid;
}

static void hws_table_set_cap_attr(struct mlx5hws_table *tbl,
@@ -119,12 +122,12 @@ static int hws_table_connect_to_default_miss_tbl(struct mlx5hws_table *tbl, u32

int mlx5hws_table_create_default_ft(struct mlx5_core_dev *mdev,
				    struct mlx5hws_table *tbl,
				    u32 *ft_id)
				    u16 uid, u32 *ft_id)
{
	struct mlx5hws_cmd_ft_create_attr ft_attr = {0};
	int ret;

	hws_table_init_next_ft_attr(tbl, &ft_attr);
	hws_table_init_next_ft_attr(tbl, uid, &ft_attr);
	hws_table_set_cap_attr(tbl, &ft_attr);

	ret = mlx5hws_cmd_flow_table_create(mdev, &ft_attr, ft_id);
@@ -189,7 +192,10 @@ static int hws_table_init(struct mlx5hws_table *tbl)
	}

	mutex_lock(&ctx->ctrl_lock);
	ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, tbl, &tbl->ft_id);
	ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev,
					      tbl,
					      tbl->uid,
					      &tbl->ft_id);
	if (ret) {
		mlx5hws_err(tbl->ctx, "Failed to create flow table object\n");
		mutex_unlock(&ctx->ctrl_lock);
@@ -239,6 +245,7 @@ struct mlx5hws_table *mlx5hws_table_create(struct mlx5hws_context *ctx,
	tbl->ctx = ctx;
	tbl->type = attr->type;
	tbl->level = attr->level;
	tbl->uid = attr->uid;

	ret = hws_table_init(tbl);
	if (ret) {
@@ -18,6 +18,7 @@ struct mlx5hws_table {
	enum mlx5hws_table_type type;
	u32 fw_ft_type;
	u32 level;
	u16 uid;
	struct list_head matchers_list;
	struct list_head tbl_list_node;
	struct mlx5hws_default_miss default_miss;
@@ -47,7 +48,7 @@ u32 mlx5hws_table_get_res_fw_ft_type(enum mlx5hws_table_type tbl_type,

int mlx5hws_table_create_default_ft(struct mlx5_core_dev *mdev,
				    struct mlx5hws_table *tbl,
				    u32 *ft_id);
				    u16 uid, u32 *ft_id);

void mlx5hws_table_destroy_default_ft(struct mlx5hws_table *tbl,
				      u32 ft_id);
@@ -2375,6 +2375,8 @@ static const struct mlxsw_listener mlxsw_sp_listener[] = {
			  ROUTER_EXP, false),
	MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_DIP_LINK_LOCAL, FORWARD,
			     ROUTER_EXP, false),
	MLXSW_SP_RXL_NO_MARK(DISCARD_ING_ROUTER_SIP_LINK_LOCAL, FORWARD,
			     ROUTER_EXP, false),
	/* Multicast Router Traps */
	MLXSW_SP_RXL_MARK(ACL1, TRAP_TO_CPU, MULTICAST, false),
	MLXSW_SP_RXL_L3_MARK(ACL2, TRAP_TO_CPU, MULTICAST, false),
@@ -94,6 +94,7 @@ enum {
	MLXSW_TRAP_ID_DISCARD_ING_ROUTER_IPV4_SIP_BC = 0x16A,
	MLXSW_TRAP_ID_DISCARD_ING_ROUTER_IPV4_DIP_LOCAL_NET = 0x16B,
	MLXSW_TRAP_ID_DISCARD_ING_ROUTER_DIP_LINK_LOCAL = 0x16C,
	MLXSW_TRAP_ID_DISCARD_ING_ROUTER_SIP_LINK_LOCAL = 0x16D,
	MLXSW_TRAP_ID_DISCARD_ROUTER_IRIF_EN = 0x178,
	MLXSW_TRAP_ID_DISCARD_ROUTER_ERIF_EN = 0x179,
	MLXSW_TRAP_ID_DISCARD_ROUTER_LPM4 = 0x17B,
@@ -32,6 +32,10 @@
/* MAC Specific Addr 1 Top Reg */
#define LAN865X_REG_MAC_H_SADDR1 0x00010023

/* MAC TSU Timer Increment Register */
#define LAN865X_REG_MAC_TSU_TIMER_INCR 0x00010077
#define MAC_TSU_TIMER_INCR_COUNT_NANOSECONDS 0x0028

struct lan865x_priv {
	struct work_struct multicast_work;
	struct net_device *netdev;
@@ -311,6 +315,8 @@ static int lan865x_net_open(struct net_device *netdev)

	phy_start(netdev->phydev);

	netif_start_queue(netdev);

	return 0;
}

@@ -344,6 +350,21 @@ static int lan865x_probe(struct spi_device *spi)
		goto free_netdev;
	}

	/* LAN865x Rev.B0/B1 configuration parameters from AN1760
	 * As per the Configuration Application Note AN1760 published in the
	 * link, https://www.microchip.com/en-us/application-notes/an1760
	 * Revision F (DS60001760G - June 2024), configure the MAC to set time
	 * stamping at the end of the Start of Frame Delimiter (SFD) and set the
	 * Timer Increment reg to 40 ns to be used as a 25 MHz internal clock.
	 */
	ret = oa_tc6_write_register(priv->tc6, LAN865X_REG_MAC_TSU_TIMER_INCR,
				    MAC_TSU_TIMER_INCR_COUNT_NANOSECONDS);
	if (ret) {
		dev_err(&spi->dev, "Failed to config TSU Timer Incr reg: %d\n",
			ret);
		goto oa_tc6_exit;
	}

	/* As per the point s3 in the below errata, SPI receive Ethernet frame
	 * transfer may halt when starting the next frame in the same data block
	 * (chunk) as the end of a previous frame. The RFA field should be
@@ -241,7 +241,7 @@ union rtase_rx_desc {
#define RTASE_RX_RES BIT(20)
#define RTASE_RX_RUNT BIT(19)
#define RTASE_RX_RWT BIT(18)
#define RTASE_RX_CRC BIT(16)
#define RTASE_RX_CRC BIT(17)
#define RTASE_RX_V6F BIT(31)
#define RTASE_RX_V4F BIT(30)
#define RTASE_RX_UDPT BIT(29)
@@ -152,7 +152,7 @@ static int thead_set_clk_tx_rate(void *bsp_priv, struct clk *clk_tx_i,
static int thead_dwmac_enable_clk(struct plat_stmmacenet_data *plat)
{
	struct thead_dwmac *dwmac = plat->bsp_priv;
	u32 reg;
	u32 reg, div;

	switch (plat->mac_interface) {
	case PHY_INTERFACE_MODE_MII:
@@ -164,6 +164,13 @@ static int thead_dwmac_enable_clk(struct plat_stmmacenet_data *plat)
	case PHY_INTERFACE_MODE_RGMII_RXID:
	case PHY_INTERFACE_MODE_RGMII_TXID:
		/* use pll */
		div = clk_get_rate(plat->stmmac_clk) / rgmii_clock(SPEED_1000);
		reg = FIELD_PREP(GMAC_PLLCLK_DIV_EN, 1) |
		      FIELD_PREP(GMAC_PLLCLK_DIV_NUM, div);

		writel(0, dwmac->apb_base + GMAC_PLLCLK_DIV);
		writel(reg, dwmac->apb_base + GMAC_PLLCLK_DIV);

		writel(GMAC_GTXCLK_SEL_PLL, dwmac->apb_base + GMAC_GTXCLK_SEL);
		reg = GMAC_TX_CLK_EN | GMAC_TX_CLK_N_EN | GMAC_TX_CLK_OUT_EN |
		      GMAC_RX_CLK_EN | GMAC_RX_CLK_N_EN;
@@ -203,6 +203,44 @@ static void prueth_emac_stop(struct prueth *prueth)
	}
}

static void icssg_enable_fw_offload(struct prueth *prueth)
{
	struct prueth_emac *emac;
	int mac;

	for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) {
		emac = prueth->emac[mac];
		if (prueth->is_hsr_offload_mode) {
			if (emac->ndev->features & NETIF_F_HW_HSR_TAG_RM)
				icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_ENABLE);
			else
				icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_DISABLE);
		}

		if (prueth->is_switch_mode || prueth->is_hsr_offload_mode) {
			if (netif_running(emac->ndev)) {
				icssg_fdb_add_del(emac, eth_stp_addr, prueth->default_vlan,
						  ICSSG_FDB_ENTRY_P0_MEMBERSHIP |
						  ICSSG_FDB_ENTRY_P1_MEMBERSHIP |
						  ICSSG_FDB_ENTRY_P2_MEMBERSHIP |
						  ICSSG_FDB_ENTRY_BLOCK,
						  true);
				icssg_vtbl_modify(emac, emac->port_vlan | DEFAULT_VID,
						  BIT(emac->port_id) | DEFAULT_PORT_MASK,
						  BIT(emac->port_id) | DEFAULT_UNTAG_MASK,
						  true);
				if (prueth->is_hsr_offload_mode)
					icssg_vtbl_modify(emac, DEFAULT_VID,
							  DEFAULT_PORT_MASK,
							  DEFAULT_UNTAG_MASK, true);
				icssg_set_pvid(prueth, emac->port_vlan, emac->port_id);
				if (prueth->is_switch_mode)
					icssg_set_port_state(emac, ICSSG_EMAC_PORT_VLAN_AWARE_ENABLE);
			}
		}
	}
}

static int prueth_emac_common_start(struct prueth *prueth)
{
	struct prueth_emac *emac;
@@ -753,6 +791,7 @@ static int emac_ndo_open(struct net_device *ndev)
		ret = prueth_emac_common_start(prueth);
		if (ret)
			goto free_rx_irq;
		icssg_enable_fw_offload(prueth);
	}

	flow_cfg = emac->dram.va + ICSSG_CONFIG_OFFSET + PSI_L_REGULAR_FLOW_ID_BASE_OFFSET;
@@ -1360,8 +1399,7 @@ static int prueth_emac_restart(struct prueth *prueth)

static void icssg_change_mode(struct prueth *prueth)
{
	struct prueth_emac *emac;
	int mac, ret;
	int ret;

	ret = prueth_emac_restart(prueth);
	if (ret) {
@@ -1369,35 +1407,7 @@ static void icssg_change_mode(struct prueth *prueth)
		return;
	}

	for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) {
		emac = prueth->emac[mac];
		if (prueth->is_hsr_offload_mode) {
			if (emac->ndev->features & NETIF_F_HW_HSR_TAG_RM)
				icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_ENABLE);
			else
				icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_DISABLE);
		}

		if (netif_running(emac->ndev)) {
			icssg_fdb_add_del(emac, eth_stp_addr, prueth->default_vlan,
					  ICSSG_FDB_ENTRY_P0_MEMBERSHIP |
					  ICSSG_FDB_ENTRY_P1_MEMBERSHIP |
					  ICSSG_FDB_ENTRY_P2_MEMBERSHIP |
					  ICSSG_FDB_ENTRY_BLOCK,
					  true);
			icssg_vtbl_modify(emac, emac->port_vlan | DEFAULT_VID,
					  BIT(emac->port_id) | DEFAULT_PORT_MASK,
					  BIT(emac->port_id) | DEFAULT_UNTAG_MASK,
					  true);
			if (prueth->is_hsr_offload_mode)
				icssg_vtbl_modify(emac, DEFAULT_VID,
						  DEFAULT_PORT_MASK,
						  DEFAULT_UNTAG_MASK, true);
			icssg_set_pvid(prueth, emac->port_vlan, emac->port_id);
			if (prueth->is_switch_mode)
				icssg_set_port_state(emac, ICSSG_EMAC_PORT_VLAN_AWARE_ENABLE);
		}
	}
	icssg_enable_fw_offload(prueth);
}

static int prueth_netdevice_port_link(struct net_device *ndev,
@@ -192,7 +192,7 @@ void wx_setup_vfmrqc_vf(struct wx *wx)
 	u8 i, j;
 
 	/* Fill out hash function seeds */
-	netdev_rss_key_fill(wx->rss_key, sizeof(wx->rss_key));
+	netdev_rss_key_fill(wx->rss_key, WX_RSS_KEY_SIZE);
 	for (i = 0; i < WX_RSS_KEY_SIZE / 4; i++)
 		wr32(wx, WX_VXRSSRK(i), wx->rss_key[i]);
 
@@ -1160,6 +1160,7 @@ static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result)
 	struct axienet_local *lp = data;
 	struct sk_buff *skb;
 	u32 *app_metadata;
+	int i;
 
 	skbuf_dma = axienet_get_rx_desc(lp, lp->rx_ring_tail++);
 	skb = skbuf_dma->skb;
@@ -1178,6 +1179,9 @@ static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result)
 	u64_stats_add(&lp->rx_packets, 1);
 	u64_stats_add(&lp->rx_bytes, rx_len);
 	u64_stats_update_end(&lp->rx_stat_sync);
 
+	for (i = 0; i < CIRC_SPACE(lp->rx_ring_head, lp->rx_ring_tail,
+				   RX_BUF_NUM_DEFAULT); i++)
+		axienet_rx_submit_desc(lp->ndev);
 	dma_async_issue_pending(lp->rx_chan);
 }
@@ -1457,7 +1461,6 @@ static void axienet_rx_submit_desc(struct net_device *ndev)
 	if (!skbuf_dma)
 		return;
 
-	lp->rx_ring_head++;
 	skb = netdev_alloc_skb(ndev, lp->max_frm_size);
 	if (!skb)
 		return;
@@ -1482,6 +1485,7 @@ static void axienet_rx_submit_desc(struct net_device *ndev)
 	skbuf_dma->desc = dma_rx_desc;
 	dma_rx_desc->callback_param = lp;
 	dma_rx_desc->callback_result = axienet_dma_rx_cb;
+	lp->rx_ring_head++;
 	dmaengine_submit(dma_rx_desc);
 
 	return;
@@ -362,6 +362,13 @@ struct vsc85xx_hw_stat {
 	u16 mask;
 };
 
+struct vsc8531_skb_cb {
+	u32 ns;
+};
+
+#define VSC8531_SKB_CB(skb) \
+	((struct vsc8531_skb_cb *)((skb)->cb))
+
 struct vsc8531_private {
 	int rate_magic;
 	u16 supp_led_modes;
@@ -410,6 +417,11 @@ struct vsc8531_private {
 	 */
 	struct mutex ts_lock;
 	struct mutex phc_lock;
+
+	/* list of skbs that were received and need timestamp information but it
+	 * didn't received it yet
+	 */
+	struct sk_buff_head rx_skbs_list;
 };
 
 /* Shared structure between the PHYs of the same package.
@@ -2335,6 +2335,13 @@ static int vsc85xx_probe(struct phy_device *phydev)
 	return vsc85xx_dt_led_modes_get(phydev, default_mode);
 }
 
+static void vsc85xx_remove(struct phy_device *phydev)
+{
+	struct vsc8531_private *priv = phydev->priv;
+
+	skb_queue_purge(&priv->rx_skbs_list);
+}
+
 /* Microsemi VSC85xx PHYs */
 static struct phy_driver vsc85xx_driver[] = {
 {
@@ -2589,6 +2596,7 @@ static struct phy_driver vsc85xx_driver[] = {
 	.config_intr = &vsc85xx_config_intr,
 	.suspend = &genphy_suspend,
 	.resume = &genphy_resume,
+	.remove = &vsc85xx_remove,
 	.probe = &vsc8574_probe,
 	.set_wol = &vsc85xx_wol_set,
 	.get_wol = &vsc85xx_wol_get,
@@ -2614,6 +2622,7 @@ static struct phy_driver vsc85xx_driver[] = {
 	.config_intr = &vsc85xx_config_intr,
 	.suspend = &genphy_suspend,
 	.resume = &genphy_resume,
+	.remove = &vsc85xx_remove,
 	.probe = &vsc8574_probe,
 	.set_wol = &vsc85xx_wol_set,
 	.get_wol = &vsc85xx_wol_get,
@@ -2639,6 +2648,7 @@ static struct phy_driver vsc85xx_driver[] = {
 	.config_intr = &vsc85xx_config_intr,
 	.suspend = &genphy_suspend,
 	.resume = &genphy_resume,
+	.remove = &vsc85xx_remove,
 	.probe = &vsc8584_probe,
 	.get_tunable = &vsc85xx_get_tunable,
 	.set_tunable = &vsc85xx_set_tunable,
@@ -2662,6 +2672,7 @@ static struct phy_driver vsc85xx_driver[] = {
 	.config_intr = &vsc85xx_config_intr,
 	.suspend = &genphy_suspend,
 	.resume = &genphy_resume,
+	.remove = &vsc85xx_remove,
 	.probe = &vsc8584_probe,
 	.get_tunable = &vsc85xx_get_tunable,
 	.set_tunable = &vsc85xx_set_tunable,
@@ -2685,6 +2696,7 @@ static struct phy_driver vsc85xx_driver[] = {
 	.config_intr = &vsc85xx_config_intr,
 	.suspend = &genphy_suspend,
 	.resume = &genphy_resume,
+	.remove = &vsc85xx_remove,
 	.probe = &vsc8584_probe,
 	.get_tunable = &vsc85xx_get_tunable,
 	.set_tunable = &vsc85xx_set_tunable,
@@ -1194,9 +1194,7 @@ static bool vsc85xx_rxtstamp(struct mii_timestamper *mii_ts,
 {
 	struct vsc8531_private *vsc8531 =
 		container_of(mii_ts, struct vsc8531_private, mii_ts);
-	struct skb_shared_hwtstamps *shhwtstamps = NULL;
 	struct vsc85xx_ptphdr *ptphdr;
-	struct timespec64 ts;
 	unsigned long ns;
 
 	if (!vsc8531->ptp->configured)
@@ -1206,27 +1204,52 @@ static bool vsc85xx_rxtstamp(struct mii_timestamper *mii_ts,
 	    type == PTP_CLASS_NONE)
 		return false;
 
-	vsc85xx_gettime(&vsc8531->ptp->caps, &ts);
-
 	ptphdr = get_ptp_header_rx(skb, vsc8531->ptp->rx_filter);
 	if (!ptphdr)
 		return false;
 
-	shhwtstamps = skb_hwtstamps(skb);
-	memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps));
-
 	ns = ntohl(ptphdr->rsrvd2);
 
-	/* nsec is in reserved field */
-	if (ts.tv_nsec < ns)
-		ts.tv_sec--;
+	VSC8531_SKB_CB(skb)->ns = ns;
+	skb_queue_tail(&vsc8531->rx_skbs_list, skb);
 
-	shhwtstamps->hwtstamp = ktime_set(ts.tv_sec, ns);
-	netif_rx(skb);
+	ptp_schedule_worker(vsc8531->ptp->ptp_clock, 0);
 
 	return true;
 }
 
+static long vsc85xx_do_aux_work(struct ptp_clock_info *info)
+{
+	struct vsc85xx_ptp *ptp = container_of(info, struct vsc85xx_ptp, caps);
+	struct skb_shared_hwtstamps *shhwtstamps = NULL;
+	struct phy_device *phydev = ptp->phydev;
+	struct vsc8531_private *priv = phydev->priv;
+	struct sk_buff_head received;
+	struct sk_buff *rx_skb;
+	struct timespec64 ts;
+	unsigned long flags;
+
+	__skb_queue_head_init(&received);
+	spin_lock_irqsave(&priv->rx_skbs_list.lock, flags);
+	skb_queue_splice_tail_init(&priv->rx_skbs_list, &received);
+	spin_unlock_irqrestore(&priv->rx_skbs_list.lock, flags);
+
+	vsc85xx_gettime(info, &ts);
+	while ((rx_skb = __skb_dequeue(&received)) != NULL) {
+		shhwtstamps = skb_hwtstamps(rx_skb);
+		memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps));
+
+		if (ts.tv_nsec < VSC8531_SKB_CB(rx_skb)->ns)
+			ts.tv_sec--;
+
+		shhwtstamps->hwtstamp = ktime_set(ts.tv_sec,
+						  VSC8531_SKB_CB(rx_skb)->ns);
+		netif_rx(rx_skb);
+	}
+
+	return -1;
+}
+
 static const struct ptp_clock_info vsc85xx_clk_caps = {
 	.owner = THIS_MODULE,
 	.name = "VSC85xx timer",
@@ -1240,6 +1263,7 @@ static const struct ptp_clock_info vsc85xx_clk_caps = {
 	.adjfine = &vsc85xx_adjfine,
 	.gettime64 = &vsc85xx_gettime,
 	.settime64 = &vsc85xx_settime,
+	.do_aux_work = &vsc85xx_do_aux_work,
 };
 
 static struct vsc8531_private *vsc8584_base_priv(struct phy_device *phydev)
@@ -1567,6 +1591,7 @@ int vsc8584_ptp_probe(struct phy_device *phydev)
 
 	mutex_init(&vsc8531->phc_lock);
 	mutex_init(&vsc8531->ts_lock);
+	skb_queue_head_init(&vsc8531->rx_skbs_list);
 
 	/* Retrieve the shared load/save GPIO. Request it as non exclusive as
 	 * the same GPIO can be requested by all the PHYs of the same package.
@@ -33,6 +33,7 @@
 #include <linux/ppp_channel.h>
 #include <linux/ppp-comp.h>
 #include <linux/skbuff.h>
+#include <linux/rculist.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_arp.h>
 #include <linux/ip.h>
@@ -1598,11 +1599,14 @@ static int ppp_fill_forward_path(struct net_device_path_ctx *ctx,
 	if (ppp->flags & SC_MULTILINK)
 		return -EOPNOTSUPP;
 
-	if (list_empty(&ppp->channels))
+	pch = list_first_or_null_rcu(&ppp->channels, struct channel, clist);
+	if (!pch)
 		return -ENODEV;
 
-	pch = list_first_entry(&ppp->channels, struct channel, clist);
-	chan = pch->chan;
+	chan = READ_ONCE(pch->chan);
+	if (!chan)
+		return -ENODEV;
+
 	if (!chan->ops->fill_forward_path)
 		return -EOPNOTSUPP;
 
@@ -2994,7 +2998,7 @@ ppp_unregister_channel(struct ppp_channel *chan)
 	 */
 	down_write(&pch->chan_sem);
 	spin_lock_bh(&pch->downl);
-	pch->chan = NULL;
+	WRITE_ONCE(pch->chan, NULL);
 	spin_unlock_bh(&pch->downl);
 	up_write(&pch->chan_sem);
 	ppp_disconnect_channel(pch);
@@ -3515,7 +3519,7 @@ ppp_connect_channel(struct channel *pch, int unit)
 	hdrlen = pch->file.hdrlen + 2;	/* for protocol bytes */
 	if (hdrlen > ppp->dev->hard_header_len)
 		ppp->dev->hard_header_len = hdrlen;
-	list_add_tail(&pch->clist, &ppp->channels);
+	list_add_tail_rcu(&pch->clist, &ppp->channels);
 	++ppp->n_channels;
 	pch->ppp = ppp;
 	refcount_inc(&ppp->file.refcnt);
@@ -3545,10 +3549,11 @@ ppp_disconnect_channel(struct channel *pch)
 	if (ppp) {
 		/* remove it from the ppp unit's list */
 		ppp_lock(ppp);
-		list_del(&pch->clist);
+		list_del_rcu(&pch->clist);
 		if (--ppp->n_channels == 0)
 			wake_up_interruptible(&ppp->file.rwait);
 		ppp_unlock(ppp);
+		synchronize_net();
 		if (refcount_dec_and_test(&ppp->file.refcnt))
 			ppp_destroy_interface(ppp);
 		err = 0;
@@ -1041,6 +1041,10 @@ pd692x0_configure_managers(struct pd692x0_priv *priv, int nmanagers)
 		int pw_budget;
 
 		pw_budget = regulator_get_unclaimed_power_budget(supply);
+		if (!pw_budget)
+			/* Do nothing if no power budget */
+			continue;
+
 		/* Max power budget per manager */
 		if (pw_budget > 6000000)
 			pw_budget = 6000000;
@@ -1162,12 +1166,44 @@ pd692x0_write_ports_matrix(struct pd692x0_priv *priv,
 	return 0;
 }
 
+static void pd692x0_of_put_managers(struct pd692x0_priv *priv,
+				    struct pd692x0_manager *manager,
+				    int nmanagers)
+{
+	int i, j;
+
+	for (i = 0; i < nmanagers; i++) {
+		for (j = 0; j < manager[i].nports; j++)
+			of_node_put(manager[i].port_node[j]);
+		of_node_put(manager[i].node);
+	}
+}
+
+static void pd692x0_managers_free_pw_budget(struct pd692x0_priv *priv)
+{
+	int i;
+
+	for (i = 0; i < PD692X0_MAX_MANAGERS; i++) {
+		struct regulator *supply;
+
+		if (!priv->manager_reg[i] || !priv->manager_pw_budget[i])
+			continue;
+
+		supply = priv->manager_reg[i]->supply;
+		if (!supply)
+			continue;
+
+		regulator_free_power_budget(supply,
+					    priv->manager_pw_budget[i]);
+	}
+}
+
 static int pd692x0_setup_pi_matrix(struct pse_controller_dev *pcdev)
 {
 	struct pd692x0_manager *manager __free(kfree) = NULL;
 	struct pd692x0_priv *priv = to_pd692x0_priv(pcdev);
 	struct pd692x0_matrix port_matrix[PD692X0_MAX_PIS];
-	int ret, i, j, nmanagers;
+	int ret, nmanagers;
 
 	/* Should we flash the port matrix */
 	if (priv->fw_state != PD692X0_FW_OK &&
@@ -1185,31 +1221,27 @@ static int pd692x0_setup_pi_matrix(struct pse_controller_dev *pcdev)
 	nmanagers = ret;
 	ret = pd692x0_register_managers_regulator(priv, manager, nmanagers);
 	if (ret)
-		goto out;
+		goto err_of_managers;
 
 	ret = pd692x0_configure_managers(priv, nmanagers);
 	if (ret)
-		goto out;
+		goto err_of_managers;
 
 	ret = pd692x0_set_ports_matrix(priv, manager, nmanagers, port_matrix);
 	if (ret)
-		goto out;
+		goto err_managers_req_pw;
 
 	ret = pd692x0_write_ports_matrix(priv, port_matrix);
 	if (ret)
-		goto out;
+		goto err_managers_req_pw;
 
-out:
-	for (i = 0; i < nmanagers; i++) {
-		struct regulator *supply = priv->manager_reg[i]->supply;
-
-		regulator_free_power_budget(supply,
-					    priv->manager_pw_budget[i]);
-
-		for (j = 0; j < manager[i].nports; j++)
-			of_node_put(manager[i].port_node[j]);
-		of_node_put(manager[i].node);
-	}
+	pd692x0_of_put_managers(priv, manager, nmanagers);
+	return 0;
+
+err_managers_req_pw:
+	pd692x0_managers_free_pw_budget(priv);
+err_of_managers:
+	pd692x0_of_put_managers(priv, manager, nmanagers);
 	return ret;
 }
 
@@ -1748,6 +1780,7 @@ static void pd692x0_i2c_remove(struct i2c_client *client)
 {
 	struct pd692x0_priv *priv = i2c_get_clientdata(client);
 
+	pd692x0_managers_free_pw_budget(priv);
 	firmware_upload_unregister(priv->fwl);
 }
 
@@ -676,7 +676,7 @@ static int ax88772_init_mdio(struct usbnet *dev)
 	priv->mdio->read = &asix_mdio_bus_read;
 	priv->mdio->write = &asix_mdio_bus_write;
 	priv->mdio->name = "Asix MDIO Bus";
-	priv->mdio->phy_mask = ~(BIT(priv->phy_addr) | BIT(AX_EMBD_PHY_ADDR));
+	priv->mdio->phy_mask = ~(BIT(priv->phy_addr & 0x1f) | BIT(AX_EMBD_PHY_ADDR));
 	/* mii bus name is usb-<usb bus number>-<usb device number> */
 	snprintf(priv->mdio->id, MII_BUS_ID_SIZE, "usb-%03d:%03d",
 		 dev->udev->bus->busnum, dev->udev->devnum);
@@ -2087,6 +2087,13 @@ static const struct usb_device_id cdc_devs[] = {
 	  .driver_info = (unsigned long)&wwan_info,
 	},
 
+	/* Intel modem (label from OEM reads Fibocom L850-GL) */
+	{ USB_DEVICE_AND_INTERFACE_INFO(0x8087, 0x095a,
+		USB_CLASS_COMM,
+		USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE),
+	  .driver_info = (unsigned long)&wwan_info,
+	},
+
 	/* DisplayLink docking stations */
 	{ .match_flags = USB_DEVICE_ID_MATCH_INT_INFO
 		| USB_DEVICE_ID_MATCH_VENDOR,
@@ -647,7 +647,7 @@ static inline void sco_exit(void)
 #if IS_ENABLED(CONFIG_BT_LE)
 int iso_init(void);
 int iso_exit(void);
-bool iso_enabled(void);
+bool iso_inited(void);
 #else
 static inline int iso_init(void)
 {
@@ -659,7 +659,7 @@ static inline int iso_exit(void)
 	return 0;
 }
 
-static inline bool iso_enabled(void)
+static inline bool iso_inited(void)
 {
 	return false;
 }
@@ -129,7 +129,9 @@ struct hci_conn_hash {
 	struct list_head list;
 	unsigned int acl_num;
 	unsigned int sco_num;
-	unsigned int iso_num;
+	unsigned int cis_num;
+	unsigned int bis_num;
+	unsigned int pa_num;
 	unsigned int le_num;
 	unsigned int le_num_peripheral;
 };
@@ -1014,9 +1016,13 @@ static inline void hci_conn_hash_add(struct hci_dev *hdev, struct hci_conn *c)
 		h->sco_num++;
 		break;
 	case CIS_LINK:
+		h->cis_num++;
+		break;
 	case BIS_LINK:
+		h->bis_num++;
+		break;
 	case PA_LINK:
-		h->iso_num++;
+		h->pa_num++;
 		break;
 	}
 }
@@ -1042,9 +1048,13 @@ static inline void hci_conn_hash_del(struct hci_dev *hdev, struct hci_conn *c)
 		h->sco_num--;
 		break;
 	case CIS_LINK:
+		h->cis_num--;
+		break;
 	case BIS_LINK:
+		h->bis_num--;
+		break;
 	case PA_LINK:
-		h->iso_num--;
+		h->pa_num--;
 		break;
 	}
 }
@@ -1061,9 +1071,11 @@ static inline unsigned int hci_conn_num(struct hci_dev *hdev, __u8 type)
 	case ESCO_LINK:
 		return h->sco_num;
 	case CIS_LINK:
+		return h->cis_num;
 	case BIS_LINK:
+		return h->bis_num;
 	case PA_LINK:
-		return h->iso_num;
+		return h->pa_num;
 	default:
 		return 0;
 	}
@@ -1073,7 +1085,15 @@ static inline unsigned int hci_conn_count(struct hci_dev *hdev)
 {
 	struct hci_conn_hash *c = &hdev->conn_hash;
 
-	return c->acl_num + c->sco_num + c->le_num + c->iso_num;
+	return c->acl_num + c->sco_num + c->le_num + c->cis_num + c->bis_num +
+		c->pa_num;
+}
+
+static inline unsigned int hci_iso_count(struct hci_dev *hdev)
+{
+	struct hci_conn_hash *c = &hdev->conn_hash;
+
+	return c->cis_num + c->bis_num;
 }
 
 static inline bool hci_conn_valid(struct hci_dev *hdev, struct hci_conn *conn)
@@ -1915,6 +1935,8 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
 				   !hci_dev_test_flag(dev, HCI_RPA_EXPIRED))
 #define adv_rpa_valid(adv) (bacmp(&adv->random_addr, BDADDR_ANY) && \
 			    !adv->rpa_expired)
+#define le_enabled(dev) (lmp_le_capable(dev) && \
+			 hci_dev_test_flag(dev, HCI_LE_ENABLED))
 
 #define scan_1m(dev) (((dev)->le_tx_def_phys & HCI_LE_SET_PHY_1M) || \
 		      ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_1M))
@@ -1932,6 +1954,7 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
 			 ((dev)->le_rx_def_phys & HCI_LE_SET_PHY_CODED))
 
 #define ll_privacy_capable(dev) ((dev)->le_features[0] & HCI_LE_LL_PRIVACY)
+#define ll_privacy_enabled(dev) (le_enabled(dev) && ll_privacy_capable(dev))
 
 #define privacy_mode_capable(dev) (ll_privacy_capable(dev) && \
 				   ((dev)->commands[39] & 0x04))
@@ -1981,14 +2004,23 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
 
 /* CIS Master/Slave and BIS support */
 #define iso_capable(dev) (cis_capable(dev) || bis_capable(dev))
+#define iso_enabled(dev) (le_enabled(dev) && iso_capable(dev))
 #define cis_capable(dev) \
 	(cis_central_capable(dev) || cis_peripheral_capable(dev))
+#define cis_enabled(dev) (le_enabled(dev) && cis_capable(dev))
 #define cis_central_capable(dev) \
 	((dev)->le_features[3] & HCI_LE_CIS_CENTRAL)
+#define cis_central_enabled(dev) \
+	(le_enabled(dev) && cis_central_capable(dev))
 #define cis_peripheral_capable(dev) \
 	((dev)->le_features[3] & HCI_LE_CIS_PERIPHERAL)
+#define cis_peripheral_enabled(dev) \
+	(le_enabled(dev) && cis_peripheral_capable(dev))
 #define bis_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_BROADCASTER)
-#define sync_recv_capable(dev) ((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER)
+#define bis_enabled(dev) (le_enabled(dev) && bis_capable(dev))
+#define sync_recv_capable(dev) \
+	((dev)->le_features[3] & HCI_LE_ISO_SYNC_RECEIVER)
+#define sync_recv_enabled(dev) (le_enabled(dev) && sync_recv_capable(dev))
 
 #define mws_transport_config_capable(dev) (((dev)->commands[30] & 0x08) && \
 	(!hci_test_quirk((dev), HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG)))
@@ -307,6 +307,7 @@ int bond_3ad_lacpdu_recv(const struct sk_buff *skb, struct bonding *bond,
 			 struct slave *slave);
 int bond_3ad_set_carrier(struct bonding *bond);
 void bond_3ad_update_lacp_rate(struct bonding *bond);
+void bond_3ad_update_lacp_active(struct bonding *bond);
 void bond_3ad_update_ad_actor_settings(struct bonding *bond);
 int bond_3ad_stats_fill(struct sk_buff *skb, struct bond_3ad_stats *stats);
 size_t bond_3ad_stats_size(void);
@@ -1038,12 +1038,17 @@ static inline struct sk_buff *qdisc_dequeue_internal(struct Qdisc *sch, bool direct)
 	skb = __skb_dequeue(&sch->gso_skb);
 	if (skb) {
 		sch->q.qlen--;
+		qdisc_qstats_backlog_dec(sch, skb);
 		return skb;
 	}
-	if (direct)
-		return __qdisc_dequeue_head(&sch->q);
-	else
+	if (direct) {
+		skb = __qdisc_dequeue_head(&sch->q);
+		if (skb)
+			qdisc_qstats_backlog_dec(sch, skb);
+		return skb;
+	} else {
 		return sch->dequeue(sch);
+	}
 }
 
 static inline struct sk_buff *qdisc_dequeue_head(struct Qdisc *sch)
@@ -339,7 +339,8 @@ static int hci_enhanced_setup_sync(struct hci_dev *hdev, void *data)
 	case BT_CODEC_TRANSPARENT:
 		if (!find_next_esco_param(conn, esco_param_msbc,
 					  ARRAY_SIZE(esco_param_msbc)))
-			return false;
+			return -EINVAL;
+
 		param = &esco_param_msbc[conn->attempt - 1];
 		cp.tx_coding_format.id = 0x03;
 		cp.rx_coding_format.id = 0x03;
@@ -830,7 +831,17 @@ static void bis_cleanup(struct hci_conn *conn)
 		/* Check if ISO connection is a BIS and terminate advertising
 		 * set and BIG if there are no other connections using it.
 		 */
-		bis = hci_conn_hash_lookup_big(hdev, conn->iso_qos.bcast.big);
+		bis = hci_conn_hash_lookup_big_state(hdev,
+						     conn->iso_qos.bcast.big,
+						     BT_CONNECTED,
+						     HCI_ROLE_MASTER);
+		if (bis)
+			return;
+
+		bis = hci_conn_hash_lookup_big_state(hdev,
+						     conn->iso_qos.bcast.big,
+						     BT_CONNECT,
+						     HCI_ROLE_MASTER);
 		if (bis)
 			return;
 
@@ -2249,7 +2260,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
 	 * the start periodic advertising and create BIG commands have
 	 * been queued
 	 */
-	hci_conn_hash_list_state(hdev, bis_mark_per_adv, PA_LINK,
+	hci_conn_hash_list_state(hdev, bis_mark_per_adv, BIS_LINK,
				 BT_BOUND, &data);
 
 	/* Queue start periodic advertising and create BIG */
@@ -6745,8 +6745,8 @@ static void hci_le_cis_established_evt(struct hci_dev *hdev, void *data,
 		qos->ucast.out.latency =
 			DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency),
 					  1000);
-		qos->ucast.in.sdu = le16_to_cpu(ev->c_mtu);
-		qos->ucast.out.sdu = le16_to_cpu(ev->p_mtu);
+		qos->ucast.in.sdu = ev->c_bn ? le16_to_cpu(ev->c_mtu) : 0;
+		qos->ucast.out.sdu = ev->p_bn ? le16_to_cpu(ev->p_mtu) : 0;
 		qos->ucast.in.phy = ev->c_phy;
 		qos->ucast.out.phy = ev->p_phy;
 		break;
@@ -6760,8 +6760,8 @@ static void hci_le_cis_established_evt(struct hci_dev *hdev, void *data,
 		qos->ucast.in.latency =
 			DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency),
 					  1000);
-		qos->ucast.out.sdu = le16_to_cpu(ev->c_mtu);
-		qos->ucast.in.sdu = le16_to_cpu(ev->p_mtu);
+		qos->ucast.out.sdu = ev->c_bn ? le16_to_cpu(ev->c_mtu) : 0;
+		qos->ucast.in.sdu = ev->p_bn ? le16_to_cpu(ev->p_mtu) : 0;
 		qos->ucast.out.phy = ev->c_phy;
 		qos->ucast.in.phy = ev->p_phy;
 		break;
@@ -6957,9 +6957,14 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
 			continue;
 		}
 
-		if (ev->status != 0x42)
+		if (ev->status != 0x42) {
 			/* Mark PA sync as established */
 			set_bit(HCI_CONN_PA_SYNC, &bis->flags);
+			/* Reset cleanup callback of PA Sync so it doesn't
+			 * terminate the sync when deleting the connection.
+			 */
+			conn->cleanup = NULL;
+		}
 
 		bis->sync_handle = conn->sync_handle;
 		bis->iso_qos.bcast.big = ev->handle;
@@ -3344,7 +3344,7 @@ static int hci_powered_update_adv_sync(struct hci_dev *hdev)
 	 * advertising data. This also applies to the case
 	 * where BR/EDR was toggled during the AUTO_OFF phase.
 	 */
-	if (hci_dev_test_flag(hdev, HCI_ADVERTISING) ||
+	if (hci_dev_test_flag(hdev, HCI_ADVERTISING) &&
 	    list_empty(&hdev->adv_instances)) {
 		if (ext_adv_capable(hdev)) {
 			err = hci_setup_ext_adv_instance_sync(hdev, 0x00);
@@ -4531,14 +4531,14 @@ static int hci_le_set_host_feature_sync(struct hci_dev *hdev)
 {
 	struct hci_cp_le_set_host_feature cp;
 
-	if (!cis_capable(hdev))
+	if (!iso_capable(hdev))
 		return 0;
 
 	memset(&cp, 0, sizeof(cp));
 
 	/* Connected Isochronous Channels (Host Support) */
 	cp.bit_number = 32;
-	cp.bit_value = 1;
+	cp.bit_value = iso_enabled(hdev) ? 0x01 : 0x00;
 
 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_HOST_FEATURE,
 				     sizeof(cp), &cp, HCI_CMD_TIMEOUT);
@@ -6985,8 +6985,6 @@ static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
 
 	hci_dev_lock(hdev);
 
-	hci_dev_clear_flag(hdev, HCI_PA_SYNC);
-
 	if (!hci_conn_valid(hdev, conn))
 		clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
 
@@ -7047,10 +7045,13 @@ static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data)
 	/* SID has not been set listen for HCI_EV_LE_EXT_ADV_REPORT to update
 	 * it.
 	 */
-	if (conn->sid == HCI_SID_INVALID)
-		__hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL,
-					 HCI_EV_LE_EXT_ADV_REPORT,
-					 conn->conn_timeout, NULL);
+	if (conn->sid == HCI_SID_INVALID) {
+		err = __hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL,
+					       HCI_EV_LE_EXT_ADV_REPORT,
+					       conn->conn_timeout, NULL);
+		if (err == -ETIMEDOUT)
+			goto done;
+	}
 
 	memset(&cp, 0, sizeof(cp));
 	cp.options = qos->bcast.options;
@@ -7080,6 +7081,12 @@ static int hci_le_pa_create_sync(struct hci_dev *hdev, void *data)
 		__hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC_CANCEL,
 				      0, NULL, HCI_CMD_TIMEOUT);
 
+done:
+	hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+
+	/* Update passive scan since HCI_PA_SYNC flag has been cleared */
+	hci_update_passive_scan_sync(hdev);
+
 	return err;
 }
 
@@ -1347,7 +1347,7 @@ static int iso_sock_getname(struct socket *sock, struct sockaddr *addr,
 	bacpy(&sa->iso_bdaddr, &iso_pi(sk)->dst);
 	sa->iso_bdaddr_type = iso_pi(sk)->dst_type;
 
-	if (hcon && hcon->type == BIS_LINK) {
+	if (hcon && (hcon->type == BIS_LINK || hcon->type == PA_LINK)) {
 		sa->iso_bc->bc_sid = iso_pi(sk)->bc_sid;
 		sa->iso_bc->bc_num_bis = iso_pi(sk)->bc_num_bis;
 		memcpy(sa->iso_bc->bc_bis, iso_pi(sk)->bc_bis,
@@ -2483,11 +2483,11 @@ static const struct net_proto_family iso_sock_family_ops = {
 	.create	= iso_sock_create,
 };
 
-static bool iso_inited;
+static bool inited;
 
-bool iso_enabled(void)
+bool iso_inited(void)
 {
-	return iso_inited;
+	return inited;
 }
 
 int iso_init(void)
@@ -2496,7 +2496,7 @@ int iso_init(void)
 
 	BUILD_BUG_ON(sizeof(struct sockaddr_iso) > sizeof(struct sockaddr));
 
-	if (iso_inited)
+	if (inited)
 		return -EALREADY;
 
 	err = proto_register(&iso_proto, 0);
@@ -2524,7 +2524,7 @@ int iso_init(void)
 	iso_debugfs = debugfs_create_file("iso", 0444, bt_debugfs,
 					  NULL, &iso_debugfs_fops);
 
-	iso_inited = true;
+	inited = true;
 
 	return 0;
 
@@ -2535,7 +2535,7 @@ error:
 
 int iso_exit(void)
 {
-	if (!iso_inited)
+	if (!inited)
 		return -EALREADY;
 
 	bt_procfs_cleanup(&init_net, "iso");
@@ -2549,7 +2549,7 @@ int iso_exit(void)
 
 	proto_unregister(&iso_proto);
 
-	iso_inited = false;
+	inited = false;
 
 	return 0;
 }
@@ -922,19 +922,19 @@ static u32 get_current_settings(struct hci_dev *hdev)
 	if (hci_dev_test_flag(hdev, HCI_WIDEBAND_SPEECH_ENABLED))
 		settings |= MGMT_SETTING_WIDEBAND_SPEECH;
 
-	if (cis_central_capable(hdev))
+	if (cis_central_enabled(hdev))
 		settings |= MGMT_SETTING_CIS_CENTRAL;
 
-	if (cis_peripheral_capable(hdev))
+	if (cis_peripheral_enabled(hdev))
 		settings |= MGMT_SETTING_CIS_PERIPHERAL;
 
-	if (bis_capable(hdev))
+	if (bis_enabled(hdev))
 		settings |= MGMT_SETTING_ISO_BROADCASTER;
 
-	if (sync_recv_capable(hdev))
+	if (sync_recv_enabled(hdev))
 		settings |= MGMT_SETTING_ISO_SYNC_RECEIVER;
 
-	if (ll_privacy_capable(hdev))
+	if (ll_privacy_enabled(hdev))
 		settings |= MGMT_SETTING_LL_PRIVACY;
 
 	return settings;
@@ -4513,7 +4513,7 @@ static int read_exp_features_info(struct sock *sk, struct hci_dev *hdev,
 	}
 
 	if (IS_ENABLED(CONFIG_BT_LE)) {
-		flags = iso_enabled() ? BIT(0) : 0;
+		flags = iso_inited() ? BIT(0) : 0;
 		memcpy(rp->features[idx].uuid, iso_socket_uuid, 16);
 		rp->features[idx].flags = cpu_to_le32(flags);
 		idx++;
|
||||
intvl_jiffies = BR_MULTICAST_QUERY_INTVL_MIN;
|
||||
}
|
||||
|
||||
if (intvl_jiffies > BR_MULTICAST_QUERY_INTVL_MAX) {
|
||||
br_info(brmctx->br,
|
||||
"trying to set multicast query interval above maximum, setting to %lu (%ums)\n",
|
||||
jiffies_to_clock_t(BR_MULTICAST_QUERY_INTVL_MAX),
|
||||
jiffies_to_msecs(BR_MULTICAST_QUERY_INTVL_MAX));
|
||||
intvl_jiffies = BR_MULTICAST_QUERY_INTVL_MAX;
|
||||
}
|
||||
|
||||
brmctx->multicast_query_interval = intvl_jiffies;
|
||||
}
|
||||
|
||||
@ -4834,6 +4842,14 @@ void br_multicast_set_startup_query_intvl(struct net_bridge_mcast *brmctx,
|
||||
intvl_jiffies = BR_MULTICAST_STARTUP_QUERY_INTVL_MIN;
|
||||
}
|
||||
|
||||
if (intvl_jiffies > BR_MULTICAST_STARTUP_QUERY_INTVL_MAX) {
|
||||
br_info(brmctx->br,
|
||||
"trying to set multicast startup query interval above maximum, setting to %lu (%ums)\n",
|
||||
jiffies_to_clock_t(BR_MULTICAST_STARTUP_QUERY_INTVL_MAX),
|
||||
jiffies_to_msecs(BR_MULTICAST_STARTUP_QUERY_INTVL_MAX));
|
||||
intvl_jiffies = BR_MULTICAST_STARTUP_QUERY_INTVL_MAX;
|
||||
}
|
||||
|
||||
brmctx->multicast_startup_query_interval = intvl_jiffies;
|
||||
}
|
||||
|
||||
|
@@ -31,6 +31,8 @@
 #define BR_MULTICAST_DEFAULT_HASH_MAX 4096
 #define BR_MULTICAST_QUERY_INTVL_MIN msecs_to_jiffies(1000)
 #define BR_MULTICAST_STARTUP_QUERY_INTVL_MIN BR_MULTICAST_QUERY_INTVL_MIN
+#define BR_MULTICAST_QUERY_INTVL_MAX msecs_to_jiffies(86400000) /* 24 hours */
+#define BR_MULTICAST_STARTUP_QUERY_INTVL_MAX BR_MULTICAST_QUERY_INTVL_MAX
 
 #define BR_HWDOM_MAX BITS_PER_LONG
 
@@ -3779,6 +3779,18 @@ static netdev_features_t gso_features_check(const struct sk_buff *skb,
 		features &= ~NETIF_F_TSO_MANGLEID;
 	}
 
+	/* NETIF_F_IPV6_CSUM does not support IPv6 extension headers,
+	 * so neither does TSO that depends on it.
+	 */
+	if (features & NETIF_F_IPV6_CSUM &&
+	    (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6 ||
+	     (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 &&
+	      vlan_get_protocol(skb) == htons(ETH_P_IPV6))) &&
+	    skb_transport_header_was_set(skb) &&
+	    skb_network_header_len(skb) != sizeof(struct ipv6hdr) &&
+	    !ipv6_has_hopopt_jumbo(skb))
+		features &= ~(NETIF_F_IPV6_CSUM | NETIF_F_TSO6 | NETIF_F_GSO_UDP_L4);
+
 	return features;
 }
 
@@ -63,8 +63,14 @@ static rx_handler_result_t hsr_handle_frame(struct sk_buff **pskb)
 	skb_push(skb, ETH_HLEN);
 	skb_reset_mac_header(skb);
 	if ((!hsr->prot_version && protocol == htons(ETH_P_PRP)) ||
-	    protocol == htons(ETH_P_HSR))
+	    protocol == htons(ETH_P_HSR)) {
+		if (!pskb_may_pull(skb, ETH_HLEN + HSR_HLEN)) {
+			kfree_skb(skb);
+			goto finish_consume;
+		}
+
 		skb_set_network_header(skb, ETH_HLEN + HSR_HLEN);
+	}
 	skb_reset_mac_len(skb);
 
 	/* Only the frames received over the interlink port will assign a
@@ -247,8 +247,7 @@ void nf_send_reset(struct net *net, struct sock *sk, struct sk_buff *oldskb,
 	if (!oth)
 		return;
 
-	if ((hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) &&
-	    nf_reject_fill_skb_dst(oldskb) < 0)
+	if (!skb_dst(oldskb) && nf_reject_fill_skb_dst(oldskb) < 0)
 		return;
 
 	if (skb_rtable(oldskb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))
@@ -321,8 +320,7 @@ void nf_send_unreach(struct sk_buff *skb_in, int code, int hook)
 	if (iph->frag_off & htons(IP_OFFSET))
 		return;
 
-	if ((hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) &&
-	    nf_reject_fill_skb_dst(skb_in) < 0)
+	if (!skb_dst(skb_in) && nf_reject_fill_skb_dst(skb_in) < 0)
 		return;
 
 	if (skb_csum_unnecessary(skb_in) ||
@@ -293,7 +293,7 @@ void nf_send_reset6(struct net *net, struct sock *sk, struct sk_buff *oldskb,
 	fl6.fl6_sport = otcph->dest;
 	fl6.fl6_dport = otcph->source;
 
-	if (hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) {
+	if (!skb_dst(oldskb)) {
 		nf_ip6_route(net, &dst, flowi6_to_flowi(&fl6), false);
 		if (!dst)
 			return;
@@ -397,8 +397,7 @@ void nf_send_unreach6(struct net *net, struct sk_buff *skb_in,
 	if (hooknum == NF_INET_LOCAL_OUT && skb_in->dev == NULL)
 		skb_in->dev = net->loopback_dev;
 
-	if ((hooknum == NF_INET_PRE_ROUTING || hooknum == NF_INET_INGRESS) &&
-	    nf_reject6_fill_skb_dst(skb_in) < 0)
+	if (!skb_dst(skb_in) && nf_reject6_fill_skb_dst(skb_in) < 0)
 		return;
 
 	icmpv6_send(skb_in, ICMPV6_DEST_UNREACH, code, 0);
@@ -35,6 +35,7 @@
 #include <net/xfrm.h>
 
 #include <crypto/hash.h>
+#include <crypto/utils.h>
 #include <net/seg6.h>
 #include <net/genetlink.h>
 #include <net/seg6_hmac.h>
@@ -280,7 +281,7 @@ bool seg6_hmac_validate_skb(struct sk_buff *skb)
 	if (seg6_hmac_compute(hinfo, srh, &ipv6_hdr(skb)->saddr, hmac_output))
 		return false;
 
-	if (memcmp(hmac_output, tlv->hmac, SEG6_HMAC_FIELD_LEN) != 0)
+	if (crypto_memneq(hmac_output, tlv->hmac, SEG6_HMAC_FIELD_LEN))
 		return false;
 
 	return true;
@@ -304,6 +305,9 @@ int seg6_hmac_info_add(struct net *net, u32 key, struct seg6_hmac_info *hinfo)
 	struct seg6_pernet_data *sdata = seg6_pernet(net);
 	int err;
 
+	if (!__hmac_get_algo(hinfo->alg_id))
+		return -EINVAL;
+
 	err = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node,
 					    rht_params);
@@ -1118,7 +1118,9 @@ static bool add_addr_hmac_valid(struct mptcp_sock *msk,
 	return hmac == mp_opt->ahmac;
 }
 
-/* Return false if a subflow has been reset, else return true */
+/* Return false in case of error (or subflow has been reset),
+ * else return true.
+ */
 bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
@@ -1222,7 +1224,7 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
 
 	mpext = skb_ext_add(skb, SKB_EXT_MPTCP);
 	if (!mpext)
-		return true;
+		return false;
 
 	memset(mpext, 0, sizeof(*mpext));
@@ -274,6 +274,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
 						add_timer);
 	struct mptcp_sock *msk = entry->sock;
 	struct sock *sk = (struct sock *)msk;
+	unsigned int timeout;
 
 	pr_debug("msk=%p\n", msk);
 
@@ -291,6 +292,10 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
 		goto out;
 	}
 
+	timeout = mptcp_get_add_addr_timeout(sock_net(sk));
+	if (!timeout)
+		goto out;
+
 	spin_lock_bh(&msk->pm.lock);
 
 	if (!mptcp_pm_should_add_signal_addr(msk)) {
@@ -302,7 +307,7 @@ static void mptcp_pm_add_timer(struct timer_list *timer)
 
 	if (entry->retrans_times < ADD_ADDR_RETRANS_MAX)
 		sk_reset_timer(sk, timer,
-			       jiffies + mptcp_get_add_addr_timeout(sock_net(sk)));
+			       jiffies + timeout);
 
 	spin_unlock_bh(&msk->pm.lock);
 
@@ -344,6 +349,7 @@ bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
 	struct mptcp_pm_add_entry *add_entry = NULL;
 	struct sock *sk = (struct sock *)msk;
 	struct net *net = sock_net(sk);
+	unsigned int timeout;
 
 	lockdep_assert_held(&msk->pm.lock);
 
@@ -353,9 +359,7 @@ bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
 		if (WARN_ON_ONCE(mptcp_pm_is_kernel(msk)))
 			return false;
 
-		sk_reset_timer(sk, &add_entry->add_timer,
-			       jiffies + mptcp_get_add_addr_timeout(net));
-		return true;
+		goto reset_timer;
 	}
 
 	add_entry = kmalloc(sizeof(*add_entry), GFP_ATOMIC);
@@ -369,8 +373,10 @@ bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk,
 	add_entry->retrans_times = 0;
 
 	timer_setup(&add_entry->add_timer, mptcp_pm_add_timer, 0);
-	sk_reset_timer(sk, &add_entry->add_timer,
-		       jiffies + mptcp_get_add_addr_timeout(net));
+reset_timer:
+	timeout = mptcp_get_add_addr_timeout(net);
+	if (timeout)
+		sk_reset_timer(sk, &add_entry->add_timer, jiffies + timeout);
 
 	return true;
 }
@@ -1085,7 +1085,6 @@ static void __flush_addrs(struct list_head *list)
 static void __reset_counters(struct pm_nl_pernet *pernet)
 {
 	WRITE_ONCE(pernet->add_addr_signal_max, 0);
-	WRITE_ONCE(pernet->add_addr_accept_max, 0);
 	WRITE_ONCE(pernet->local_addr_max, 0);
 	pernet->addrs = 0;
 }
@@ -1750,7 +1750,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	ktime_t now = ktime_get();
 	struct cake_tin_data *b;
 	struct cake_flow *flow;
-	u32 idx;
+	u32 idx, tin;
 
 	/* choose flow to insert into */
 	idx = cake_classify(sch, &b, skb, q->flow_mode, &ret);
@@ -1760,6 +1760,7 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 			__qdisc_drop(skb, to_free);
 		return ret;
 	}
+	tin = (u32)(b - q->tins);
 	idx--;
 	flow = &b->flows[idx];
 
@@ -1927,13 +1928,22 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		q->buffer_max_used = q->buffer_used;
 
 	if (q->buffer_used > q->buffer_limit) {
+		bool same_flow = false;
 		u32 dropped = 0;
+		u32 drop_id;
 
 		while (q->buffer_used > q->buffer_limit) {
 			dropped++;
-			cake_drop(sch, to_free);
+			drop_id = cake_drop(sch, to_free);
+
+			if ((drop_id >> 16) == tin &&
+			    (drop_id & 0xFFFF) == idx)
+				same_flow = true;
 		}
 		b->drop_overlimit += dropped;
+
+		if (same_flow)
+			return NET_XMIT_CN;
 	}
 	return NET_XMIT_SUCCESS;
 }
@@ -101,9 +101,9 @@ static const struct nla_policy codel_policy[TCA_CODEL_MAX + 1] = {
 static int codel_change(struct Qdisc *sch, struct nlattr *opt,
 			struct netlink_ext_ack *extack)
 {
+	unsigned int dropped_pkts = 0, dropped_bytes = 0;
 	struct codel_sched_data *q = qdisc_priv(sch);
 	struct nlattr *tb[TCA_CODEL_MAX + 1];
-	unsigned int qlen, dropped = 0;
 	int err;
 
 	err = nla_parse_nested_deprecated(tb, TCA_CODEL_MAX, opt,
@@ -142,15 +142,17 @@ static int codel_change(struct Qdisc *sch, struct nlattr *opt,
 		WRITE_ONCE(q->params.ecn,
 			   !!nla_get_u32(tb[TCA_CODEL_ECN]));
 
-	qlen = sch->q.qlen;
 	while (sch->q.qlen > sch->limit) {
 		struct sk_buff *skb = qdisc_dequeue_internal(sch, true);
 
-		dropped += qdisc_pkt_len(skb);
-		qdisc_qstats_backlog_dec(sch, skb);
+		if (!skb)
+			break;
+
+		dropped_pkts++;
+		dropped_bytes += qdisc_pkt_len(skb);
 		rtnl_qdisc_drop(skb, sch);
 	}
-	qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen, dropped);
+	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
 
 	sch_tree_unlock(sch);
 	return 0;
@@ -927,7 +927,8 @@ static int dualpi2_init(struct Qdisc *sch, struct nlattr *opt,
 
 	q->sch = sch;
 	dualpi2_reset_default(sch);
-	hrtimer_setup(&q->pi2_timer, dualpi2_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+	hrtimer_setup(&q->pi2_timer, dualpi2_timer, CLOCK_MONOTONIC,
+		      HRTIMER_MODE_ABS_PINNED_SOFT);
 
 	if (opt && nla_len(opt)) {
 		err = dualpi2_change(sch, opt, extack);
@@ -937,7 +938,7 @@ static int dualpi2_init(struct Qdisc *sch, struct nlattr *opt,
 	}
 
 	hrtimer_start(&q->pi2_timer, next_pi2_timeout(q),
-		      HRTIMER_MODE_ABS_PINNED);
+		      HRTIMER_MODE_ABS_PINNED_SOFT);
 	return 0;
 }
@@ -1013,11 +1013,11 @@ static int fq_load_priomap(struct fq_sched_data *q,
 static int fq_change(struct Qdisc *sch, struct nlattr *opt,
 		     struct netlink_ext_ack *extack)
 {
+	unsigned int dropped_pkts = 0, dropped_bytes = 0;
 	struct fq_sched_data *q = qdisc_priv(sch);
 	struct nlattr *tb[TCA_FQ_MAX + 1];
-	int err, drop_count = 0;
-	unsigned drop_len = 0;
 	u32 fq_log;
+	int err;
 
 	err = nla_parse_nested_deprecated(tb, TCA_FQ_MAX, opt, fq_policy,
 					  NULL);
@@ -1135,16 +1135,18 @@ static int fq_change(struct Qdisc *sch, struct nlattr *opt,
 		err = fq_resize(sch, fq_log);
 		sch_tree_lock(sch);
 	}
+
 	while (sch->q.qlen > sch->limit) {
 		struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
 
 		if (!skb)
 			break;
-		drop_len += qdisc_pkt_len(skb);
+
+		dropped_pkts++;
+		dropped_bytes += qdisc_pkt_len(skb);
 		rtnl_kfree_skbs(skb, skb);
-		drop_count++;
 	}
-	qdisc_tree_reduce_backlog(sch, drop_count, drop_len);
+	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
 
 	sch_tree_unlock(sch);
 	return err;
@@ -366,6 +366,7 @@ static const struct nla_policy fq_codel_policy[TCA_FQ_CODEL_MAX + 1] = {
 static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
 			   struct netlink_ext_ack *extack)
 {
+	unsigned int dropped_pkts = 0, dropped_bytes = 0;
 	struct fq_codel_sched_data *q = qdisc_priv(sch);
 	struct nlattr *tb[TCA_FQ_CODEL_MAX + 1];
 	u32 quantum = 0;
@@ -443,13 +444,14 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
 	       q->memory_usage > q->memory_limit) {
 		struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
 
-		q->cstats.drop_len += qdisc_pkt_len(skb);
+		if (!skb)
+			break;
+
+		dropped_pkts++;
+		dropped_bytes += qdisc_pkt_len(skb);
 		rtnl_kfree_skbs(skb, skb);
-		q->cstats.drop_count++;
 	}
-	qdisc_tree_reduce_backlog(sch, q->cstats.drop_count, q->cstats.drop_len);
-	q->cstats.drop_count = 0;
-	q->cstats.drop_len = 0;
+	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
 
 	sch_tree_unlock(sch);
 	return 0;
@@ -287,10 +287,9 @@ begin:
 static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt,
 			 struct netlink_ext_ack *extack)
 {
+	unsigned int dropped_pkts = 0, dropped_bytes = 0;
 	struct fq_pie_sched_data *q = qdisc_priv(sch);
 	struct nlattr *tb[TCA_FQ_PIE_MAX + 1];
-	unsigned int len_dropped = 0;
-	unsigned int num_dropped = 0;
 	int err;
 
 	err = nla_parse_nested(tb, TCA_FQ_PIE_MAX, opt, fq_pie_policy, extack);
@@ -368,11 +367,14 @@ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt,
 	while (sch->q.qlen > sch->limit) {
 		struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
 
-		len_dropped += qdisc_pkt_len(skb);
-		num_dropped += 1;
+		if (!skb)
+			break;
+
+		dropped_pkts++;
+		dropped_bytes += qdisc_pkt_len(skb);
 		rtnl_kfree_skbs(skb, skb);
 	}
-	qdisc_tree_reduce_backlog(sch, num_dropped, len_dropped);
+	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
 
 	sch_tree_unlock(sch);
 	return 0;
@@ -508,9 +508,9 @@ static const struct nla_policy hhf_policy[TCA_HHF_MAX + 1] = {
 static int hhf_change(struct Qdisc *sch, struct nlattr *opt,
 		      struct netlink_ext_ack *extack)
 {
+	unsigned int dropped_pkts = 0, dropped_bytes = 0;
 	struct hhf_sched_data *q = qdisc_priv(sch);
 	struct nlattr *tb[TCA_HHF_MAX + 1];
-	unsigned int qlen, prev_backlog;
 	int err;
 	u64 non_hh_quantum;
 	u32 new_quantum = q->quantum;
@@ -561,15 +561,17 @@ static int hhf_change(struct Qdisc *sch, struct nlattr *opt,
 			   usecs_to_jiffies(us));
 	}
 
-	qlen = sch->q.qlen;
-	prev_backlog = sch->qstats.backlog;
 	while (sch->q.qlen > sch->limit) {
 		struct sk_buff *skb = qdisc_dequeue_internal(sch, false);
 
+		if (!skb)
+			break;
+
+		dropped_pkts++;
+		dropped_bytes += qdisc_pkt_len(skb);
 		rtnl_kfree_skbs(skb, skb);
 	}
-	qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen,
-				  prev_backlog - sch->qstats.backlog);
+	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
 
 	sch_tree_unlock(sch);
 	return 0;
@@ -592,7 +592,7 @@ htb_change_class_mode(struct htb_sched *q, struct htb_class *cl, s64 *diff)
  */
 static inline void htb_activate(struct htb_sched *q, struct htb_class *cl)
 {
-	WARN_ON(cl->level || !cl->leaf.q || !cl->leaf.q->q.qlen);
+	WARN_ON(cl->level || !cl->leaf.q);
 
 	if (!cl->prio_activity) {
 		cl->prio_activity = 1 << cl->prio;
@@ -141,9 +141,9 @@ static const struct nla_policy pie_policy[TCA_PIE_MAX + 1] = {
 static int pie_change(struct Qdisc *sch, struct nlattr *opt,
 		      struct netlink_ext_ack *extack)
 {
+	unsigned int dropped_pkts = 0, dropped_bytes = 0;
 	struct pie_sched_data *q = qdisc_priv(sch);
 	struct nlattr *tb[TCA_PIE_MAX + 1];
-	unsigned int qlen, dropped = 0;
 	int err;
 
 	err = nla_parse_nested_deprecated(tb, TCA_PIE_MAX, opt, pie_policy,
@@ -193,15 +193,17 @@ static int pie_change(struct Qdisc *sch, struct nlattr *opt,
 			   nla_get_u32(tb[TCA_PIE_DQ_RATE_ESTIMATOR]));
 
 	/* Drop excess packets if new limit is lower */
-	qlen = sch->q.qlen;
 	while (sch->q.qlen > sch->limit) {
 		struct sk_buff *skb = qdisc_dequeue_internal(sch, true);
 
-		dropped += qdisc_pkt_len(skb);
-		qdisc_qstats_backlog_dec(sch, skb);
+		if (!skb)
+			break;
+
+		dropped_pkts++;
+		dropped_bytes += qdisc_pkt_len(skb);
 		rtnl_qdisc_drop(skb, sch);
 	}
-	qdisc_tree_reduce_backlog(sch, qlen - sch->q.qlen, dropped);
+	qdisc_tree_reduce_backlog(sch, dropped_pkts, dropped_bytes);
 
 	sch_tree_unlock(sch);
 	return 0;
@@ -2568,8 +2568,9 @@ static void smc_listen_work(struct work_struct *work)
 		goto out_decl;
 	}
 
-	smc_listen_out_connected(new_smc);
 	SMC_STAT_SERV_SUCC_INC(sock_net(newclcsock->sk), ini);
+	/* smc_listen_out() will release smcsk */
+	smc_listen_out_connected(new_smc);
 	goto out_free;
 
 out_unlock:
@@ -1808,6 +1808,9 @@ int decrypt_skb(struct sock *sk, struct scatterlist *sgout)
 	return tls_decrypt_sg(sk, NULL, sgout, &darg);
 }
 
+/* All records returned from a recvmsg() call must have the same type.
+ * 0 is not a valid content type. Use it as "no type reported, yet".
+ */
 static int tls_record_content_type(struct msghdr *msg, struct tls_msg *tlm,
 				   u8 *control)
 {
@@ -2051,8 +2054,10 @@ int tls_sw_recvmsg(struct sock *sk,
 	if (err < 0)
 		goto end;
 
+	/* process_rx_list() will set @control if it processed any records */
 	copied = err;
-	if (len <= copied || (copied && control != TLS_RECORD_TYPE_DATA) || rx_more)
+	if (len <= copied || rx_more ||
+	    (control && control != TLS_RECORD_TYPE_DATA))
 		goto end;
 
 	target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
@@ -10,7 +10,8 @@ TEST_PROGS := \
 	mode-2-recovery-updelay.sh \
 	bond_options.sh \
 	bond-eth-type-change.sh \
-	bond_macvlan_ipvlan.sh
+	bond_macvlan_ipvlan.sh \
+	bond_passive_lacp.sh
 
 TEST_FILES := \
 	lag_lib.sh \
105
tools/testing/selftests/drivers/net/bonding/bond_passive_lacp.sh
Executable file
@@ -0,0 +1,105 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Test if a bond interface works with lacp_active=off.
+
+# shellcheck disable=SC2034
+REQUIRE_MZ=no
+NUM_NETIFS=0
+lib_dir=$(dirname "$0")
+# shellcheck disable=SC1091
+source "$lib_dir"/../../../net/forwarding/lib.sh
+
+# shellcheck disable=SC2317
+check_port_state()
+{
+	local netns=$1
+	local port=$2
+	local state=$3
+
+	ip -n "${netns}" -d -j link show "$port" | \
+		jq -e ".[].linkinfo.info_slave_data.ad_actor_oper_port_state_str | index(\"${state}\") != null" > /dev/null
+}
+
+check_pkt_count()
+{
+	RET=0
+	local ns="$1"
+	local iface="$2"
+
+	# wait 65s, one per 30s
+	slowwait_for_counter 65 2 tc_rule_handle_stats_get \
+		"dev ${iface} egress" 101 ".packets" "-n ${ns}" &> /dev/null
+}
+
+setup() {
+	setup_ns c_ns s_ns
+
+	# shellcheck disable=SC2154
+	ip -n "${c_ns}" link add eth0 type veth peer name eth0 netns "${s_ns}"
+	ip -n "${c_ns}" link add eth1 type veth peer name eth1 netns "${s_ns}"
+
+	# Add tc filter to count the pkts
+	tc -n "${c_ns}" qdisc add dev eth0 clsact
+	tc -n "${c_ns}" filter add dev eth0 egress handle 101 protocol 0x8809 matchall action pass
+	tc -n "${s_ns}" qdisc add dev eth1 clsact
+	tc -n "${s_ns}" filter add dev eth1 egress handle 101 protocol 0x8809 matchall action pass
+
+	ip -n "${s_ns}" link add bond0 type bond mode 802.3ad lacp_active on lacp_rate fast
+	ip -n "${s_ns}" link set eth0 master bond0
+	ip -n "${s_ns}" link set eth1 master bond0
+
+	ip -n "${c_ns}" link add bond0 type bond mode 802.3ad lacp_active off lacp_rate fast
+	ip -n "${c_ns}" link set eth0 master bond0
+	ip -n "${c_ns}" link set eth1 master bond0
+
+}
+
+trap cleanup_all_ns EXIT
+setup
+
+# The bond will send 2 lacpdu pkts during init time, let's wait at least 2s
+# after interface up
+ip -n "${c_ns}" link set bond0 up
+sleep 2
+
+# 1. The passive side shouldn't send LACPDU.
+check_pkt_count "${c_ns}" "eth0" && RET=1
+log_test "802.3ad lacp_active off" "init port"
+
+ip -n "${s_ns}" link set bond0 up
+# 2. The passive side should not have the 'active' flag.
+RET=0
+slowwait 2 check_port_state "${c_ns}" "eth0" "active" && RET=1
+log_test "802.3ad lacp_active off" "port state active"
+
+# 3. The active side should have the 'active' flag.
+RET=0
+slowwait 2 check_port_state "${s_ns}" "eth0" "active" || RET=1
+log_test "802.3ad lacp_active on" "port state active"
+
+# 4. Make sure the connection is not expired.
+RET=0
+slowwait 5 check_port_state "${s_ns}" "eth0" "distributing"
+slowwait 10 check_port_state "${s_ns}" "eth0" "expired" && RET=1
+log_test "bond 802.3ad lacp_active off" "port connection"
+
+# After testing, disconnect one port on each side to check the state.
+ip -n "${s_ns}" link set eth0 nomaster
+ip -n "${s_ns}" link set eth0 up
+ip -n "${c_ns}" link set eth1 nomaster
+ip -n "${c_ns}" link set eth1 up
+# Due to Periodic Machine and Rx Machine state change, the bond will still
+# send lacpdu pkts in a few seconds. sleep at lease 5s to make sure
+# negotiation finished
+sleep 5
+
+# 5. The active side should keep sending LACPDU.
+check_pkt_count "${s_ns}" "eth1" || RET=1
+log_test "bond 802.3ad lacp_active on" "port pkt after disconnect"
+
+# 6. The passive side shouldn't send LACPDU anymore.
+check_pkt_count "${c_ns}" "eth0" && RET=1
+log_test "bond 802.3ad lacp_active off" "port pkt after disconnect"
+
+exit "$EXIT_STATUS"
@@ -6,6 +6,7 @@ CONFIG_MACVLAN=y
 CONFIG_IPVLAN=y
 CONFIG_NET_ACT_GACT=y
 CONFIG_NET_CLS_FLOWER=y
+CONFIG_NET_CLS_MATCHALL=m
 CONFIG_NET_SCH_INGRESS=y
 CONFIG_NLMON=y
 CONFIG_VETH=y
@@ -18,6 +18,8 @@
 # | 2001:db8:1::1/64     2001:db8:2::1/64 |
 # |                                       |
 # +-----------------------------------------------------------------+
 #
+#shellcheck disable=SC2034 # SC doesn't see our uses of global variables
+
 ALL_TESTS="
 	ping_ipv4
@@ -27,6 +29,7 @@ ALL_TESTS="
 	ipv4_sip_equal_dip
 	ipv6_sip_equal_dip
 	ipv4_dip_link_local
+	ipv4_sip_link_local
 "
 
 NUM_NETIFS=4
@@ -330,6 +333,32 @@ ipv4_dip_link_local()
 	tc filter del dev $rp2 egress protocol ip pref 1 handle 101 flower
 }
 
+ipv4_sip_link_local()
+{
+	local sip=169.254.1.1
+
+	RET=0
+
+	# Disable rpfilter to prevent packets to be dropped because of it.
+	sysctl_set net.ipv4.conf.all.rp_filter 0
+	sysctl_set net.ipv4.conf."$rp1".rp_filter 0
+
+	tc filter add dev "$rp2" egress protocol ip pref 1 handle 101 \
+		flower src_ip "$sip" action pass
+
+	$MZ "$h1" -t udp "sp=54321,dp=12345" -c 5 -d 1msec -b "$rp1mac" \
+		-A "$sip" -B 198.51.100.2 -q
+
+	tc_check_packets "dev $rp2 egress" 101 5
+	check_err $? "Packets were dropped"
+
+	log_test "IPv4 source IP is link-local"
+
+	tc filter del dev "$rp2" egress protocol ip pref 1 handle 101 flower
+	sysctl_restore net.ipv4.conf."$rp1".rp_filter
+	sysctl_restore net.ipv4.conf.all.rp_filter
+}
+
 trap cleanup EXIT
 
 setup_prepare
@@ -183,9 +183,10 @@ static void xgetaddrinfo(const char *node, const char *service,
 			 struct addrinfo *hints,
 			 struct addrinfo **res)
 {
-again:
-	int err = getaddrinfo(node, service, hints, res);
+	int err;
 
+again:
+	err = getaddrinfo(node, service, hints, res);
 	if (err) {
 		const char *errstr;
 
@@ -75,9 +75,10 @@ static void xgetaddrinfo(const char *node, const char *service,
 			 struct addrinfo *hints,
 			 struct addrinfo **res)
 {
-again:
-	int err = getaddrinfo(node, service, hints, res);
+	int err;
 
+again:
+	err = getaddrinfo(node, service, hints, res);
 	if (err) {
 		const char *errstr;
 
@@ -3842,6 +3842,7 @@ endpoint_tests()
 	# remove and re-add
 	if reset_with_events "delete re-add signal" &&
 	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+		ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=0
 		pm_nl_set_limits $ns1 0 3
 		pm_nl_set_limits $ns2 3 3
 		pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal
@@ -162,9 +162,10 @@ static void xgetaddrinfo(const char *node, const char *service,
 			 struct addrinfo *hints,
 			 struct addrinfo **res)
 {
-again:
-	int err = getaddrinfo(node, service, hints, res);
+	int err;
 
+again:
+	err = getaddrinfo(node, service, hints, res);
 	if (err) {
 		const char *errstr;
 
@@ -198,6 +198,7 @@ set_limits 1 9 2>/dev/null
 check "get_limits" "${default_limits}" "subflows above hard limit"
 
 set_limits 8 8
+flush_endpoint  ## to make sure it doesn't affect the limits
 check "get_limits" "$(format_limits 8 8)" "set limits"
 
 flush_endpoint
@@ -181,13 +181,12 @@ static int tls_send_cmsg(int fd, unsigned char record_type,
 	return sendmsg(fd, &msg, flags);
 }
 
-static int tls_recv_cmsg(struct __test_metadata *_metadata,
-			 int fd, unsigned char record_type,
+static int __tls_recv_cmsg(struct __test_metadata *_metadata,
+			   int fd, unsigned char *ctype,
 			 void *data, size_t len, int flags)
 {
 	char cbuf[CMSG_SPACE(sizeof(char))];
 	struct cmsghdr *cmsg;
-	unsigned char ctype;
 	struct msghdr msg;
 	struct iovec vec;
 	int n;
@@ -206,7 +205,20 @@ static int tls_recv_cmsg(struct __test_metadata *_metadata,
 	EXPECT_NE(cmsg, NULL);
 	EXPECT_EQ(cmsg->cmsg_level, SOL_TLS);
 	EXPECT_EQ(cmsg->cmsg_type, TLS_GET_RECORD_TYPE);
-	ctype = *((unsigned char *)CMSG_DATA(cmsg));
+	if (ctype)
+		*ctype = *((unsigned char *)CMSG_DATA(cmsg));
 
 	return n;
 }
 
+static int tls_recv_cmsg(struct __test_metadata *_metadata,
+			 int fd, unsigned char record_type,
+			 void *data, size_t len, int flags)
+{
+	unsigned char ctype;
+	int n;
+
+	n = __tls_recv_cmsg(_metadata, fd, &ctype, data, len, flags);
+	EXPECT_EQ(ctype, record_type);
+
+	return n;
@ -2164,6 +2176,284 @@ TEST_F(tls, rekey_poll_delay)
|
||||
}
|
||||
}
|
||||
|
||||
struct raw_rec {
|
||||
unsigned int plain_len;
|
||||
unsigned char plain_data[100];
|
||||
unsigned int cipher_len;
|
||||
unsigned char cipher_data[128];
|
||||
};
|
||||
|
||||
/* TLS 1.2, AES_CCM, data, seqno:0, plaintext: 'Hello world' */
|
||||
static const struct raw_rec id0_data_l11 = {
|
||||
.plain_len = 11,
|
||||
.plain_data = {
|
||||
0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f,
|
||||
0x72, 0x6c, 0x64,
|
||||
},
|
||||
.cipher_len = 40,
|
||||
.cipher_data = {
|
||||
0x17, 0x03, 0x03, 0x00, 0x23, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00, 0x00, 0x26, 0xa2, 0x33,
|
||||
0xde, 0x8d, 0x94, 0xf0, 0x29, 0x6c, 0xb1, 0xaf,
|
||||
0x6a, 0x75, 0xb2, 0x93, 0xad, 0x45, 0xd5, 0xfd,
|
||||
0x03, 0x51, 0x57, 0x8f, 0xf9, 0xcc, 0x3b, 0x42,
|
||||
},
|
||||
};
|
||||
|
||||
/* TLS 1.2, AES_CCM, ctrl, seqno:0, plaintext: '' */
|
||||
static const struct raw_rec id0_ctrl_l0 = {
|
||||
.plain_len = 0,
|
||||
.plain_data = {
|
||||
},
|
||||
.cipher_len = 29,
|
||||
.cipher_data = {
|
||||
0x16, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00, 0x00, 0x13, 0x38, 0x7b,
|
||||
0xa6, 0x1c, 0xdd, 0xa7, 0x19, 0x33, 0xab, 0xae,
|
||||
0x88, 0xe1, 0xd2, 0x08, 0x4f,
|
||||
},
|
||||
};
|
||||
|
||||
/* TLS 1.2, AES_CCM, data, seqno:0, plaintext: '' */
|
||||
static const struct raw_rec id0_data_l0 = {
|
||||
.plain_len = 0,
|
||||
.plain_data = {
|
||||
},
|
||||
.cipher_len = 29,
|
||||
.cipher_data = {
|
||||
0x17, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00, 0x00, 0xc5, 0x37, 0x90,
|
||||
0x70, 0x45, 0x89, 0xfb, 0x5c, 0xc7, 0x89, 0x03,
|
||||
0x68, 0x80, 0xd3, 0xd8, 0xcc,
|
||||
},
|
||||
};
|
||||
|
||||
/* TLS 1.2, AES_CCM, data, seqno:1, plaintext: 'Hello world' */
|
||||
static const struct raw_rec id1_data_l11 = {
|
||||
.plain_len = 11,
|
||||
.plain_data = {
|
||||
0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f,
|
||||
0x72, 0x6c, 0x64,
|
||||
},
|
||||
.cipher_len = 40,
|
||||
.cipher_data = {
|
||||
0x17, 0x03, 0x03, 0x00, 0x23, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00, 0x01, 0x3a, 0x1a, 0x9c,
|
||||
0xd0, 0xa8, 0x9a, 0xd6, 0x69, 0xd6, 0x1a, 0xe3,
|
||||
0xb5, 0x1f, 0x0d, 0x2c, 0xe2, 0x97, 0x46, 0xff,
|
||||
0x2b, 0xcc, 0x5a, 0xc4, 0xa3, 0xb9, 0xef, 0xba,
|
||||
},
|
||||
};
|
||||
|
||||
/* TLS 1.2, AES_CCM, ctrl, seqno:1, plaintext: '' */
|
||||
static const struct raw_rec id1_ctrl_l0 = {
|
||||
.plain_len = 0,
|
||||
.plain_data = {
|
||||
},
|
||||
.cipher_len = 29,
|
||||
.cipher_data = {
|
||||
0x16, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00, 0x01, 0x3e, 0xf0, 0xfe,
|
||||
0xee, 0xd9, 0xe2, 0x5d, 0xc7, 0x11, 0x4c, 0xe6,
|
||||
0xb4, 0x7e, 0xef, 0x40, 0x2b,
|
||||
},
|
||||
};
|
||||
|
||||
/* TLS 1.2, AES_CCM, data, seqno:1, plaintext: '' */
|
||||
static const struct raw_rec id1_data_l0 = {
|
||||
.plain_len = 0,
|
||||
.plain_data = {
|
||||
},
|
||||
.cipher_len = 29,
|
||||
.cipher_data = {
|
||||
0x17, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00, 0x01, 0xce, 0xfc, 0x86,
|
||||
0xc8, 0xf0, 0x55, 0xf9, 0x47, 0x3f, 0x74, 0xdc,
|
||||
0xc9, 0xbf, 0xfe, 0x5b, 0xb1,
|
||||
},
|
||||
};
|
||||
|
||||
/* TLS 1.2, AES_CCM, ctrl, seqno:2, plaintext: 'Hello world' */
|
||||
static const struct raw_rec id2_ctrl_l11 = {
|
||||
.plain_len = 11,
|
||||
.plain_data = {
|
||||
0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f,
|
||||
0x72, 0x6c, 0x64,
|
||||
},
|
||||
.cipher_len = 40,
|
||||
.cipher_data = {
|
||||
0x16, 0x03, 0x03, 0x00, 0x23, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00, 0x02, 0xe5, 0x3d, 0x19,
|
||||
0x3d, 0xca, 0xb8, 0x16, 0xb6, 0xff, 0x79, 0x87,
|
||||
0x2a, 0x04, 0x11, 0x3d, 0xf8, 0x64, 0x5f, 0x36,
|
||||
0x8b, 0xa8, 0xee, 0x4c, 0x6d, 0x62, 0xa5, 0x00,
|
||||
},
|
||||
};
|
||||
|
||||
/* TLS 1.2, AES_CCM, data, seqno:2, plaintext: 'Hello world' */
|
||||
static const struct raw_rec id2_data_l11 = {
|
||||
.plain_len = 11,
|
||||
.plain_data = {
|
||||
0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x77, 0x6f,
|
||||
0x72, 0x6c, 0x64,
|
||||
},
|
||||
.cipher_len = 40,
|
||||
.cipher_data = {
|
||||
0x17, 0x03, 0x03, 0x00, 0x23, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00, 0x02, 0xe5, 0x3d, 0x19,
|
||||
0x3d, 0xca, 0xb8, 0x16, 0xb6, 0xff, 0x79, 0x87,
|
||||
0x8e, 0xa1, 0xd0, 0xcd, 0x33, 0xb5, 0x86, 0x2b,
|
||||
0x17, 0xf1, 0x52, 0x2a, 0x55, 0x62, 0x65, 0x11,
|
||||
},
|
||||
};
|
||||
|
||||
/* TLS 1.2, AES_CCM, ctrl, seqno:2, plaintext: '' */
|
||||
static const struct raw_rec id2_ctrl_l0 = {
|
||||
.plain_len = 0,
|
||||
.plain_data = {
|
||||
},
|
||||
.cipher_len = 29,
|
||||
.cipher_data = {
|
||||
0x16, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00, 0x02, 0xdc, 0x5c, 0x0e,
|
||||
0x41, 0xdd, 0xba, 0xd3, 0xcc, 0xcf, 0x6d, 0xd9,
|
||||
0x06, 0xdb, 0x79, 0xe5, 0x5d,
|
||||
},
|
||||
};
|
||||
|
||||
/* TLS 1.2, AES_CCM, data, seqno:2, plaintext: '' */
|
||||
static const struct raw_rec id2_data_l0 = {
|
||||
.plain_len = 0,
|
||||
.plain_data = {
|
||||
},
|
||||
.cipher_len = 29,
|
||||
.cipher_data = {
|
||||
0x17, 0x03, 0x03, 0x00, 0x18, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00, 0x02, 0xc3, 0xca, 0x26,
|
||||
0x22, 0xe4, 0x25, 0xfb, 0x5f, 0x6d, 0xbf, 0x83,
|
||||
0x30, 0x48, 0x69, 0x1a, 0x47,
|
||||
},
|
||||
};
|
||||
|
||||
FIXTURE(zero_len)
{
	int fd, cfd;
	bool notls;
};

FIXTURE_VARIANT(zero_len)
{
	const struct raw_rec *recs[4];
	ssize_t recv_ret[4];
};

FIXTURE_VARIANT_ADD(zero_len, data_data_data)
{
	.recs = { &id0_data_l11, &id1_data_l11, &id2_data_l11, },
	.recv_ret = { 33, -EAGAIN, },
};

FIXTURE_VARIANT_ADD(zero_len, data_0ctrl_data)
{
	.recs = { &id0_data_l11, &id1_ctrl_l0, &id2_data_l11, },
	.recv_ret = { 11, 0, 11, -EAGAIN, },
};

FIXTURE_VARIANT_ADD(zero_len, 0data_0data_0data)
{
	.recs = { &id0_data_l0, &id1_data_l0, &id2_data_l0, },
	.recv_ret = { -EAGAIN, },
};

FIXTURE_VARIANT_ADD(zero_len, 0data_0data_ctrl)
{
	.recs = { &id0_data_l0, &id1_data_l0, &id2_ctrl_l11, },
	.recv_ret = { 0, 11, -EAGAIN, },
};

FIXTURE_VARIANT_ADD(zero_len, 0data_0data_0ctrl)
{
	.recs = { &id0_data_l0, &id1_data_l0, &id2_ctrl_l0, },
	.recv_ret = { 0, 0, -EAGAIN, },
};

FIXTURE_VARIANT_ADD(zero_len, 0ctrl_0ctrl_0ctrl)
{
	.recs = { &id0_ctrl_l0, &id1_ctrl_l0, &id2_ctrl_l0, },
	.recv_ret = { 0, 0, 0, -EAGAIN, },
};

FIXTURE_VARIANT_ADD(zero_len, 0data_0data_data)
{
	.recs = { &id0_data_l0, &id1_data_l0, &id2_data_l11, },
	.recv_ret = { 11, -EAGAIN, },
};

FIXTURE_VARIANT_ADD(zero_len, data_0data_0data)
{
	.recs = { &id0_data_l11, &id1_data_l0, &id2_data_l0, },
	.recv_ret = { 11, -EAGAIN, },
};

FIXTURE_SETUP(zero_len)
{
	struct tls_crypto_info_keys tls12;
	int ret;

	tls_crypto_info_init(TLS_1_2_VERSION, TLS_CIPHER_AES_CCM_128,
			     &tls12, 0);

	ulp_sock_pair(_metadata, &self->fd, &self->cfd, &self->notls);
	if (self->notls)
		return;

	/* Don't install keys on fd, we'll send raw records */
	ret = setsockopt(self->cfd, SOL_TLS, TLS_RX, &tls12, tls12.len);
	ASSERT_EQ(ret, 0);
}

FIXTURE_TEARDOWN(zero_len)
{
	close(self->fd);
	close(self->cfd);
}

TEST_F(zero_len, test)
{
	const struct raw_rec *const *rec;
	unsigned char buf[128];
	int rec_off;
	int i;

	for (i = 0; i < 4 && variant->recs[i]; i++)
		EXPECT_EQ(send(self->fd, variant->recs[i]->cipher_data,
			       variant->recs[i]->cipher_len, 0),
			  variant->recs[i]->cipher_len);

	rec = &variant->recs[0];
	rec_off = 0;
	for (i = 0; i < 4; i++) {
		int j, ret;

		ret = variant->recv_ret[i] >= 0 ? variant->recv_ret[i] : -1;
		EXPECT_EQ(__tls_recv_cmsg(_metadata, self->cfd, NULL,
					  buf, sizeof(buf), MSG_DONTWAIT), ret);
		if (ret == -1)
			EXPECT_EQ(errno, -variant->recv_ret[i]);
		if (variant->recv_ret[i] == -EAGAIN)
			break;

		for (j = 0; j < ret; j++) {
			while (rec_off == (*rec)->plain_len) {
				rec++;
				rec_off = 0;
			}
			EXPECT_EQ(buf[j], (*rec)->plain_data[rec_off]);
			rec_off++;
		}
	}
}

FIXTURE(tls_err)
{
	int fd, cfd;

@@ -2748,17 +3038,18 @@ TEST(data_steal) {
	pid = fork();
	ASSERT_GE(pid, 0);
	if (!pid) {
		EXPECT_EQ(recv(cfd, buf, sizeof(buf), MSG_WAITALL),
			  sizeof(buf));
		EXPECT_EQ(recv(cfd, buf, sizeof(buf) / 2, MSG_WAITALL),
			  sizeof(buf) / 2);
		exit(!__test_passed(_metadata));
	}

	usleep(2000);
	usleep(10000);
	ASSERT_EQ(setsockopt(fd, SOL_TLS, TLS_TX, &tls, tls.len), 0);
	ASSERT_EQ(setsockopt(cfd, SOL_TLS, TLS_RX, &tls, tls.len), 0);

	EXPECT_EQ(send(fd, buf, sizeof(buf), 0), sizeof(buf));
	usleep(2000);
	EXPECT_EQ(wait(&status), pid);
	EXPECT_EQ(status, 0);
	EXPECT_EQ(recv(cfd, buf2, sizeof(buf2), MSG_DONTWAIT), -1);
	/* Don't check errno, the error will be different depending
	 * on what random bytes TLS interpreted as the record length.
@@ -2766,9 +3057,6 @@ TEST(data_steal) {

	close(fd);
	close(cfd);

	EXPECT_EQ(wait(&status), pid);
	EXPECT_EQ(status, 0);
}

static void __attribute__((constructor)) fips_check(void) {

@@ -185,6 +185,204 @@
            "$IP addr del 10.10.10.10/24 dev $DUMMY || true"
        ]
    },
    {
        "id": "34c0",
        "name": "Test TBF with HHF Backlog Accounting in gso_skb case against underflow",
        "category": [
            "qdisc",
            "tbf",
            "hhf"
        ],
        "plugins": {
            "requires": [
                "nsPlugin"
            ]
        },
        "setup": [
            "$IP link set dev $DUMMY up || true",
            "$IP addr add 10.10.11.10/24 dev $DUMMY || true",
            "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",
            "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 hhf limit 1000",
            [
                "ping -I $DUMMY -c2 10.10.11.11",
                1
            ],
            "$TC qdisc change dev $DUMMY handle 2: parent 1:1 hhf limit 1"
        ],
        "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",
        "expExitCode": "0",
        "verifyCmd": "$TC -s qdisc show dev $DUMMY",
        "matchPattern": "backlog 0b 0p",
        "matchCount": "1",
        "teardown": [
            "$TC qdisc del dev $DUMMY handle 1: root"
        ]
    },
    {
        "id": "fd68",
        "name": "Test TBF with CODEL Backlog Accounting in gso_skb case against underflow",
        "category": [
            "qdisc",
            "tbf",
            "codel"
        ],
        "plugins": {
            "requires": [
                "nsPlugin"
            ]
        },
        "setup": [
            "$IP link set dev $DUMMY up || true",
            "$IP addr add 10.10.11.10/24 dev $DUMMY || true",
            "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",
            "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 codel limit 1000",
            [
                "ping -I $DUMMY -c2 10.10.11.11",
                1
            ],
            "$TC qdisc change dev $DUMMY handle 2: parent 1:1 codel limit 1"
        ],
        "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",
        "expExitCode": "0",
        "verifyCmd": "$TC -s qdisc show dev $DUMMY",
        "matchPattern": "backlog 0b 0p",
        "matchCount": "1",
        "teardown": [
            "$TC qdisc del dev $DUMMY handle 1: root"
        ]
    },
    {
        "id": "514e",
        "name": "Test TBF with PIE Backlog Accounting in gso_skb case against underflow",
        "category": [
            "qdisc",
            "tbf",
            "pie"
        ],
        "plugins": {
            "requires": [
                "nsPlugin"
            ]
        },
        "setup": [
            "$IP link set dev $DUMMY up || true",
            "$IP addr add 10.10.11.10/24 dev $DUMMY || true",
            "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",
            "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 pie limit 1000",
            [
                "ping -I $DUMMY -c2 10.10.11.11",
                1
            ],
            "$TC qdisc change dev $DUMMY handle 2: parent 1:1 pie limit 1"
        ],
        "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",
        "expExitCode": "0",
        "verifyCmd": "$TC -s qdisc show dev $DUMMY",
        "matchPattern": "backlog 0b 0p",
        "matchCount": "1",
        "teardown": [
            "$TC qdisc del dev $DUMMY handle 1: root"
        ]
    },
    {
        "id": "6c97",
        "name": "Test TBF with FQ Backlog Accounting in gso_skb case against underflow",
        "category": [
            "qdisc",
            "tbf",
            "fq"
        ],
        "plugins": {
            "requires": [
                "nsPlugin"
            ]
        },
        "setup": [
            "$IP link set dev $DUMMY up || true",
            "$IP addr add 10.10.11.10/24 dev $DUMMY || true",
            "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",
            "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 fq limit 1000",
            [
                "ping -I $DUMMY -c2 10.10.11.11",
                1
            ],
            "$TC qdisc change dev $DUMMY handle 2: parent 1:1 fq limit 1"
        ],
        "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",
        "expExitCode": "0",
        "verifyCmd": "$TC -s qdisc show dev $DUMMY",
        "matchPattern": "backlog 0b 0p",
        "matchCount": "1",
        "teardown": [
            "$TC qdisc del dev $DUMMY handle 1: root"
        ]
    },
    {
        "id": "5d0b",
        "name": "Test TBF with FQ_CODEL Backlog Accounting in gso_skb case against underflow",
        "category": [
            "qdisc",
            "tbf",
            "fq_codel"
        ],
        "plugins": {
            "requires": [
                "nsPlugin"
            ]
        },
        "setup": [
            "$IP link set dev $DUMMY up || true",
            "$IP addr add 10.10.11.10/24 dev $DUMMY || true",
            "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",
            "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 fq_codel limit 1000",
            [
                "ping -I $DUMMY -c2 10.10.11.11",
                1
            ],
            "$TC qdisc change dev $DUMMY handle 2: parent 1:1 fq_codel limit 1"
        ],
        "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",
        "expExitCode": "0",
        "verifyCmd": "$TC -s qdisc show dev $DUMMY",
        "matchPattern": "backlog 0b 0p",
        "matchCount": "1",
        "teardown": [
            "$TC qdisc del dev $DUMMY handle 1: root"
        ]
    },
    {
        "id": "21c3",
        "name": "Test TBF with FQ_PIE Backlog Accounting in gso_skb case against underflow",
        "category": [
            "qdisc",
            "tbf",
            "fq_pie"
        ],
        "plugins": {
            "requires": [
                "nsPlugin"
            ]
        },
        "setup": [
            "$IP link set dev $DUMMY up || true",
            "$IP addr add 10.10.11.10/24 dev $DUMMY || true",
            "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",
            "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 fq_pie limit 1000",
            [
                "ping -I $DUMMY -c2 10.10.11.11",
                1
            ],
            "$TC qdisc change dev $DUMMY handle 2: parent 1:1 fq_pie limit 1"
        ],
        "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",
        "expExitCode": "0",
        "verifyCmd": "$TC -s qdisc show dev $DUMMY",
        "matchPattern": "backlog 0b 0p",
        "matchCount": "1",
        "teardown": [
            "$TC qdisc del dev $DUMMY handle 1: root"
        ]
    },
    {
        "id": "a4bb",
        "name": "Test FQ_CODEL with HTB parent - force packet drop with empty queue",