ia64/linux-2.6.18-xen.hg

view drivers/net/forcedeth.c @ 897:329ea0ccb344

balloon: try harder to balloon up under memory pressure.

Currently, if the balloon driver is unable to increase the guest's
reservation, it assumes the failure was due to reaching its full
allocation, gives up on the ballooning operation and records the limit
it reached as the "hard limit". The driver will not try again until
the target is set again (even to the same value).

However, it is possible that ballooning has in fact failed due to
memory pressure in the host, and it is therefore desirable to keep
attempting to reach the target in case memory becomes available. The
most likely scenario is that some guests are ballooning down while
others are ballooning up, causing temporary memory pressure while
things stabilise. You would not expect a well-behaved toolstack to ask
a domain to balloon to more than its allocation, nor would you expect
it to deliberately over-commit memory by setting balloon targets which
exceed the total host memory.

This patch drops the concept of a hard limit and causes the balloon
driver to retry increasing the reservation on a timer in the same
manner as when decreasing the reservation.

Also, if we partially succeed in increasing the reservation
(i.e. we receive fewer pages than we asked for), we may as well keep
those pages rather than returning them to Xen.
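
A minimal sketch of the retry-on-timer behaviour described above. The helper
names (current_target, increase_reservation, decrease_reservation,
balloon_worker) and the workqueue plumbing are illustrative assumptions, not
the literal drivers/xen/balloon.c code:

    #include <linux/workqueue.h>

    static unsigned long balloon_current_pages;      /* assumed bookkeeping   */
    static struct delayed_work balloon_worker;       /* assumed retry timer   */

    static long current_target(void);                /* assumed helpers:      */
    static long increase_reservation(long nr_pages); /* returns pages gained  */
    static long decrease_reservation(long nr_pages); /* returns pages released*/

    static void balloon_process(struct work_struct *work)
    {
            long credit = current_target() - balloon_current_pages;

            if (credit > 0) {
                    /* Keep whatever pages the hypervisor did give us; on a
                     * shortfall, retry on a timer instead of recording a
                     * "hard limit" and giving up until the target changes. */
                    if (increase_reservation(credit) < credit)
                            schedule_delayed_work(&balloon_worker, HZ);
            } else if (credit < 0) {
                    if (decrease_reservation(-credit) < -credit)
                            schedule_delayed_work(&balloon_worker, HZ);
            }
    }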

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
author Keir Fraser <keir.fraser@citrix.com>
date Fri Jun 05 14:01:20 2009 +0100 (2009-06-05)
parents 4ffa9ad54890
line source
1 /*
2 * forcedeth: Ethernet driver for NVIDIA nForce media access controllers.
3 *
4 * Note: This driver is a cleanroom reimplementation based on reverse
5 * engineered documentation written by Carl-Daniel Hailfinger
6 * and Andrew de Quincey. It's neither supported nor endorsed
7 * by NVIDIA Corp. Use at your own risk.
8 *
9 * NVIDIA, nForce and other NVIDIA marks are trademarks or registered
10 * trademarks of NVIDIA Corporation in the United States and other
11 * countries.
12 *
13 * Copyright (C) 2003,4,5 Manfred Spraul
14 * Copyright (C) 2004 Andrew de Quincey (wol support)
15 * Copyright (C) 2004 Carl-Daniel Hailfinger (invalid MAC handling, insane
16 * IRQ rate fixes, bigendian fixes, cleanups, verification)
17 * Copyright (c) 2004 NVIDIA Corporation
18 *
19 * This program is free software; you can redistribute it and/or modify
20 * it under the terms of the GNU General Public License as published by
21 * the Free Software Foundation; either version 2 of the License, or
22 * (at your option) any later version.
23 *
24 * This program is distributed in the hope that it will be useful,
25 * but WITHOUT ANY WARRANTY; without even the implied warranty of
26 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
27 * GNU General Public License for more details.
28 *
29 * You should have received a copy of the GNU General Public License
30 * along with this program; if not, write to the Free Software
31 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
32 *
33 * Changelog:
34 * 0.01: 05 Oct 2003: First release that compiles without warnings.
35 * 0.02: 05 Oct 2003: Fix bug for nv_drain_tx: do not try to free NULL skbs.
36 * Check all PCI BARs for the register window.
37 * udelay added to mii_rw.
38 * 0.03: 06 Oct 2003: Initialize dev->irq.
39 * 0.04: 07 Oct 2003: Initialize np->lock, reduce handled irqs, add printks.
40 * 0.05: 09 Oct 2003: printk removed again, irq status print tx_timeout.
41 * 0.06: 10 Oct 2003: MAC Address read updated, pff flag generation updated,
42 * irq mask updated
43 * 0.07: 14 Oct 2003: Further irq mask updates.
44 * 0.08: 20 Oct 2003: rx_desc.Length initialization added, nv_alloc_rx refill
45 * added into irq handler, NULL check for drain_ring.
46 * 0.09: 20 Oct 2003: Basic link speed irq implementation. Only handle the
47 * requested interrupt sources.
48 * 0.10: 20 Oct 2003: First cleanup for release.
49 * 0.11: 21 Oct 2003: hexdump for tx added, rx buffer sizes increased.
50 * MAC Address init fix, set_multicast cleanup.
51 * 0.12: 23 Oct 2003: Cleanups for release.
52 * 0.13: 25 Oct 2003: Limit for concurrent tx packets increased to 10.
53 * Set link speed correctly. start rx before starting
54 * tx (nv_start_rx sets the link speed).
55 * 0.14: 25 Oct 2003: Nic dependent irq mask.
56 * 0.15: 08 Nov 2003: fix smp deadlock with set_multicast_list during
57 * open.
58 * 0.16: 15 Nov 2003: include file cleanup for ppc64, rx buffer size
59 * increased to 1628 bytes.
60 * 0.17: 16 Nov 2003: undo rx buffer size increase. Subtract 1 from
61 * the tx length.
62 * 0.18: 17 Nov 2003: fix oops due to late initialization of dev_stats
63 * 0.19: 29 Nov 2003: Handle RxNoBuf, detect & handle invalid mac
64 * addresses, really stop rx if already running
65 * in nv_start_rx, clean up a bit.
66 * 0.20: 07 Dec 2003: alloc fixes
67 * 0.21: 12 Jan 2004: additional alloc fix, nic polling fix.
68 * 0.22: 19 Jan 2004: reprogram timer to a sane rate, avoid lockup
69 * on close.
70 * 0.23: 26 Jan 2004: various small cleanups
71 * 0.24: 27 Feb 2004: make driver even less anonymous in backtraces
72 * 0.25: 09 Mar 2004: wol support
73 * 0.26: 03 Jun 2004: netdriver specific annotation, sparse-related fixes
74 * 0.27: 19 Jun 2004: Gigabit support, new descriptor rings,
75 * added CK804/MCP04 device IDs, code fixes
76 * for registers, link status and other minor fixes.
77 * 0.28: 21 Jun 2004: Big cleanup, making driver mostly endian safe
78 * 0.29: 31 Aug 2004: Add backup timer for link change notification.
79 * 0.30: 25 Sep 2004: rx checksum support for nf 250 Gb. Add rx reset
80 * into nv_close, otherwise reenabling for wol can
81 * cause DMA to kfree'd memory.
82 * 0.31: 14 Nov 2004: ethtool support for getting/setting link
83 * capabilities.
84 * 0.32: 16 Apr 2005: RX_ERROR4 handling added.
85 * 0.33: 16 May 2005: Support for MCP51 added.
86 * 0.34: 18 Jun 2005: Add DEV_NEED_LINKTIMER to all nForce nics.
87 * 0.35: 26 Jun 2005: Support for MCP55 added.
88 * 0.36: 28 Jun 2005: Add jumbo frame support.
89 * 0.37: 10 Jul 2005: Additional ethtool support, cleanup of pci id list
90 * 0.38: 16 Jul 2005: tx irq rewrite: Use global flags instead of
91 * per-packet flags.
92 * 0.39: 18 Jul 2005: Add 64bit descriptor support.
93 * 0.40: 19 Jul 2005: Add support for mac address change.
94 * 0.41: 30 Jul 2005: Write back original MAC in nv_close instead
95 * of nv_remove
96 * 0.42: 06 Aug 2005: Fix lack of link speed initialization
97 * in the second (and later) nv_open call
98 * 0.43: 10 Aug 2005: Add support for tx checksum.
99 * 0.44: 20 Aug 2005: Add support for scatter gather and segmentation.
100 * 0.45: 18 Sep 2005: Remove nv_stop/start_rx from every link check
101 * 0.46: 20 Oct 2005: Add irq optimization modes.
102 * 0.47: 26 Oct 2005: Add phyaddr 0 in phy scan.
103 * 0.48: 24 Dec 2005: Disable TSO, bugfix for pci_map_single
104 * 0.49: 10 Dec 2005: Fix tso for large buffers.
105 * 0.50: 20 Jan 2006: Add 8021pq tagging support.
106 * 0.51: 20 Jan 2006: Add 64bit consistent memory allocation for rings.
107 * 0.52: 20 Jan 2006: Add MSI/MSIX support.
108 * 0.53: 19 Mar 2006: Fix init from low power mode and add hw reset.
109 * 0.54: 21 Mar 2006: Fix spin locks for multi irqs and cleanup.
110 * 0.55: 22 Mar 2006: Add flow control (pause frame).
111 * 0.56: 22 Mar 2006: Additional ethtool and moduleparam support.
112 * 0.57: 14 May 2006: Moved mac address writes to nv_probe and nv_remove.
113 * 0.58: 20 May 2006: Optimized rx and tx data paths.
114 * 0.59: 31 May 2006: Added support for sideband management unit.
115 * 0.60: 31 May 2006: Added support for recoverable error.
116 * 0.61: 18 Jul 2006: Added support for suspend/resume.
117 * 0.62: 16 Jan 2007: Fixed statistics, mgmt communication, and low phy speed on S5.
118 *
119 * Known bugs:
120 * We suspect that on some hardware no TX done interrupts are generated.
121 * This means recovery from netif_stop_queue only happens if the hw timer
122 * interrupt fires (100 times/second, configurable with NVREG_POLL_DEFAULT)
123 * and the timer is active in the IRQMask, or if a rx packet arrives by chance.
124 * If your hardware reliably generates tx done interrupts, then you can remove
125 * DEV_NEED_TIMERIRQ from the driver_data flags.
126 * DEV_NEED_TIMERIRQ will not harm you on sane hardware, only generating a few
127 * superfluous timer interrupts from the nic.
128 */
129 #define FORCEDETH_VERSION "0.62-Driver Package V1.25"
130 #define DRV_NAME "forcedeth"
131 #define DRV_DATE "2008/01/30"
133 #include <linux/module.h>
134 #include <linux/types.h>
135 #include <linux/pci.h>
136 #include <linux/interrupt.h>
137 #include <linux/netdevice.h>
138 #include <linux/etherdevice.h>
139 #include <linux/delay.h>
140 #include <linux/spinlock.h>
141 #include <linux/ethtool.h>
142 #include <linux/timer.h>
143 #include <linux/skbuff.h>
144 #include <linux/mii.h>
145 #include <linux/random.h>
146 #include <linux/init.h>
147 #include <linux/if_vlan.h>
148 #include <linux/rtnetlink.h>
149 #include <linux/reboot.h>
150 #include <linux/version.h>
152 #define RHES3 0
153 #define SLES9 1
154 #define RHES4 2
155 #define SUSE10 3
156 #define FEDORA5 4
157 #define FEDORA6 5
158 #define SLES10U1 5
159 #define FEDORA7 6
160 #define OPENSUSE10U3 7
161 #define NVNEW 8
163 #if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,22)
164 #define NVVER NVNEW
165 #elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,21)
166 #define NVVER OPENSUSE10U3
167 #elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,18)
168 #define NVVER FEDORA7
169 #elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,17)
170 #define NVVER FEDORA6
171 #elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,13)
172 #define NVVER FEDORA5
173 #elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,9)
174 #define NVVER SUSE10
175 #elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,6)
176 #define NVVER RHES4
177 #elif LINUX_VERSION_CODE > KERNEL_VERSION(2,6,0)
178 #define NVVER SLES9
179 #else
180 #define NVVER RHES3
181 #endif
183 #if NVVER > RHES3
184 #include <linux/dma-mapping.h>
185 #else
186 #include <linux/forcedeth-compat.h>
187 #endif
189 #include <asm/irq.h>
190 #include <asm/io.h>
191 #include <asm/uaccess.h>
192 #include <asm/system.h>
194 #ifdef NVLAN_DEBUG
195 #define dprintk printk
196 #else
197 #define dprintk(x...) do { } while (0)
198 #endif
200 #define DPRINTK(nlevel,klevel,args...) (void)((debug & NETIF_MSG_##nlevel) && printk(klevel args))
202 /* pci_ids.h */
203 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_12
204 #define PCI_DEVICE_ID_NVIDIA_NVENET_12 0x0268
205 #endif
207 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_13
208 #define PCI_DEVICE_ID_NVIDIA_NVENET_13 0x0269
209 #endif
211 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_14
212 #define PCI_DEVICE_ID_NVIDIA_NVENET_14 0x0372
213 #endif
215 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_15
216 #define PCI_DEVICE_ID_NVIDIA_NVENET_15 0x0373
217 #endif
219 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_16
220 #define PCI_DEVICE_ID_NVIDIA_NVENET_16 0x03E5
221 #endif
223 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_17
224 #define PCI_DEVICE_ID_NVIDIA_NVENET_17 0x03E6
225 #endif
227 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_18
228 #define PCI_DEVICE_ID_NVIDIA_NVENET_18 0x03EE
229 #endif
231 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_19
232 #define PCI_DEVICE_ID_NVIDIA_NVENET_19 0x03EF
233 #endif
235 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_20
236 #define PCI_DEVICE_ID_NVIDIA_NVENET_20 0x0450
237 #endif
239 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_21
240 #define PCI_DEVICE_ID_NVIDIA_NVENET_21 0x0451
241 #endif
243 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_22
244 #define PCI_DEVICE_ID_NVIDIA_NVENET_22 0x0452
245 #endif
247 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_23
248 #define PCI_DEVICE_ID_NVIDIA_NVENET_23 0x0453
249 #endif
251 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_24
252 #define PCI_DEVICE_ID_NVIDIA_NVENET_24 0x054c
253 #endif
255 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_25
256 #define PCI_DEVICE_ID_NVIDIA_NVENET_25 0x054d
257 #endif
259 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_26
260 #define PCI_DEVICE_ID_NVIDIA_NVENET_26 0x054e
261 #endif
263 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_27
264 #define PCI_DEVICE_ID_NVIDIA_NVENET_27 0x054f
265 #endif
267 /* mii.h */
268 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_28
269 #define PCI_DEVICE_ID_NVIDIA_NVENET_28 0x07dc
270 #endif
272 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_29
273 #define PCI_DEVICE_ID_NVIDIA_NVENET_29 0x07dd
274 #endif
276 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_30
277 #define PCI_DEVICE_ID_NVIDIA_NVENET_30 0x07de
278 #endif
280 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_31
281 #define PCI_DEVICE_ID_NVIDIA_NVENET_31 0x07df
282 #endif
284 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_32
285 #define PCI_DEVICE_ID_NVIDIA_NVENET_32 0x0760
286 #endif
288 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_33
289 #define PCI_DEVICE_ID_NVIDIA_NVENET_33 0x0761
290 #endif
292 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_34
293 #define PCI_DEVICE_ID_NVIDIA_NVENET_34 0x0762
294 #endif
296 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_35
297 #define PCI_DEVICE_ID_NVIDIA_NVENET_35 0x0763
298 #endif
300 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_36
301 #define PCI_DEVICE_ID_NVIDIA_NVENET_36 0x0AB0
302 #endif
304 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_37
305 #define PCI_DEVICE_ID_NVIDIA_NVENET_37 0x0AB1
306 #endif
308 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_38
309 #define PCI_DEVICE_ID_NVIDIA_NVENET_38 0x0AB2
310 #endif
312 #ifndef PCI_DEVICE_ID_NVIDIA_NVENET_39
313 #define PCI_DEVICE_ID_NVIDIA_NVENET_39 0x0AB3
314 #endif
316 #ifndef ADVERTISE_1000HALF
317 #define ADVERTISE_1000HALF 0x0100
318 #endif
319 #ifndef ADVERTISE_1000FULL
320 #define ADVERTISE_1000FULL 0x0200
321 #endif
322 #ifndef ADVERTISE_PAUSE_CAP
323 #define ADVERTISE_PAUSE_CAP 0x0400
324 #endif
325 #ifndef ADVERTISE_PAUSE_ASYM
326 #define ADVERTISE_PAUSE_ASYM 0x0800
327 #endif
328 #ifndef MII_CTRL1000
329 #define MII_CTRL1000 0x09
330 #endif
331 #ifndef MII_STAT1000
332 #define MII_STAT1000 0x0A
333 #endif
334 #ifndef LPA_1000FULL
335 #define LPA_1000FULL 0x0800
336 #endif
337 #ifndef LPA_1000HALF
338 #define LPA_1000HALF 0x0400
339 #endif
340 #ifndef LPA_PAUSE_CAP
341 #define LPA_PAUSE_CAP 0x0400
342 #endif
343 #ifndef LPA_PAUSE_ASYM
344 #define LPA_PAUSE_ASYM 0x0800
345 #endif
346 #ifndef BMCR_SPEED1000
347 #define BMCR_SPEED1000 0x0040 /* MSB of Speed (1000) */
348 #endif
350 #ifndef NETDEV_TX_OK
351 #define NETDEV_TX_OK 0 /* driver took care of packet */
352 #endif
354 #ifndef NETDEV_TX_BUSY
355 #define NETDEV_TX_BUSY 1 /* driver tx path was busy*/
356 #endif
358 #ifndef DMA_39BIT_MASK
359 #define DMA_39BIT_MASK 0x0000007fffffffffULL
360 #endif
362 #ifndef __iomem
363 #define __iomem
364 #endif
366 #ifndef __bitwise
367 #define __bitwise
368 #endif
370 #ifndef __force
371 #define __force
372 #endif
374 #ifndef PCI_D0
375 #define PCI_D0 ((int __bitwise __force) 0)
376 #endif
378 #ifndef PM_EVENT_SUSPEND
379 #define PM_EVENT_SUSPEND 2
380 #endif
382 #ifndef MODULE_VERSION
383 #define MODULE_VERSION(ver)
384 #endif
386 #if NVVER > FEDORA6
387 #define CHECKSUM_HW CHECKSUM_PARTIAL
388 #endif
390 #if NVVER < SUSE10
391 #define pm_message_t u32
392 #endif
394 /* rx/tx mac addr + type + vlan + align + slack*/
395 #ifndef RX_NIC_BUFSIZE
396 #define RX_NIC_BUFSIZE (ETH_DATA_LEN + 64)
397 #endif
398 /* even more slack */
399 #ifndef RX_ALLOC_BUFSIZE
400 #define RX_ALLOC_BUFSIZE (ETH_DATA_LEN + 128)
401 #endif
403 #ifndef PCI_DEVICE
404 #define PCI_DEVICE(vend,dev) \
405 .vendor = (vend), .device = (dev), \
406 .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID
407 #endif
409 #if NVVER < RHES4
410 struct msix_entry {
411 u16 vector; /* kernel uses to write allocated vector */
412 u16 entry; /* driver uses to specify entry, OS writes */
413 };
414 #endif
416 #ifndef PCI_MSIX_ENTRY_LOWER_ADDR_OFFSET
417 #define PCI_MSIX_ENTRY_LOWER_ADDR_OFFSET 0x00
418 #endif
420 #ifndef PCI_MSIX_ENTRY_UPPER_ADDR_OFFSET
421 #define PCI_MSIX_ENTRY_UPPER_ADDR_OFFSET 0x04
422 #endif
424 #ifndef PCI_MSIX_ENTRY_DATA_OFFSET
425 #define PCI_MSIX_ENTRY_DATA_OFFSET 0x08
426 #endif
428 #ifndef PCI_MSIX_ENTRY_SIZE
429 #define PCI_MSIX_ENTRY_SIZE 16
430 #endif
432 #ifndef PCI_MSIX_FLAGS_BIRMASK
433 #define PCI_MSIX_FLAGS_BIRMASK (7 << 0)
434 #endif
436 #ifndef PCI_CAP_ID_MSIX
437 #define PCI_CAP_ID_MSIX 0x11
438 #endif
440 #if NVVER > FEDORA7
441 #define IRQ_FLAG IRQF_SHARED
442 #else
443 #define IRQ_FLAG SA_SHIRQ
444 #endif
446 /*
447 * Hardware access:
448 */
450 #define DEV_NEED_TIMERIRQ 0x00001 /* set the timer irq flag in the irq mask */
451 #define DEV_NEED_LINKTIMER 0x00002 /* poll link settings. Relies on the timer irq */
452 #define DEV_HAS_LARGEDESC 0x00004 /* device supports jumbo frames and needs packet format 2 */
453 #define DEV_HAS_HIGH_DMA 0x00008 /* device supports 64bit dma */
454 #define DEV_HAS_CHECKSUM 0x00010 /* device supports tx and rx checksum offloads */
455 #define DEV_HAS_VLAN 0x00020 /* device supports vlan tagging and stripping */
456 #define DEV_HAS_MSI 0x00040 /* device supports MSI */
457 #define DEV_HAS_MSI_X 0x00080 /* device supports MSI-X */
458 #define DEV_HAS_POWER_CNTRL 0x00100 /* device supports power savings */
459 #define DEV_HAS_STATISTICS_V1 0x00200 /* device supports hw statistics version 1 */
460 #define DEV_HAS_STATISTICS_V2 0x00400 /* device supports hw statistics version 2 */
461 #define DEV_HAS_TEST_EXTENDED 0x00800 /* device supports extended diagnostic test */
462 #define DEV_HAS_MGMT_UNIT 0x01000 /* device supports management unit */
463 #define DEV_HAS_CORRECT_MACADDR 0x02000 /* device supports correct mac address */
464 #define DEV_HAS_COLLISION_FIX 0x04000 /* device supports tx collision fix */
465 #define DEV_HAS_PAUSEFRAME_TX_V1 0x08000 /* device supports tx pause frames version 1 */
466 #define DEV_HAS_PAUSEFRAME_TX_V2 0x10000 /* device supports tx pause frames version 2 */
467 #define DEV_HAS_PAUSEFRAME_TX_V3 0x20000 /* device supports tx pause frames version 3 */
470 #define NVIDIA_ETHERNET_ID(deviceid,nv_driver_data) {\
471 .vendor = PCI_VENDOR_ID_NVIDIA, \
472 .device = deviceid, \
473 .subvendor = PCI_ANY_ID, \
474 .subdevice = PCI_ANY_ID, \
475 .driver_data = nv_driver_data, \
476 },
478 #define Mv_LED_Control 16
479 #define Mv_Page_Address 22
480 #define Mv_LED_FORCE_OFF 0x88
481 #define Mv_LED_DUAL_MODE3 0x40
483 struct nvmsi_msg{
484 u32 address_lo;
485 u32 address_hi;
486 u32 data;
487 };
489 enum {
490 NvRegIrqStatus = 0x000,
491 #define NVREG_IRQSTAT_MIIEVENT 0x040
492 #define NVREG_IRQSTAT_MASK 0x81ff
493 NvRegIrqMask = 0x004,
494 #define NVREG_IRQ_RX_ERROR 0x0001
495 #define NVREG_IRQ_RX 0x0002
496 #define NVREG_IRQ_RX_NOBUF 0x0004
497 #define NVREG_IRQ_TX_ERR 0x0008
498 #define NVREG_IRQ_TX_OK 0x0010
499 #define NVREG_IRQ_TIMER 0x0020
500 #define NVREG_IRQ_LINK 0x0040
501 #define NVREG_IRQ_RX_FORCED 0x0080
502 #define NVREG_IRQ_TX_FORCED 0x0100
503 #define NVREG_IRQ_RECOVER_ERROR 0x8000
504 #define NVREG_IRQMASK_THROUGHPUT 0x00df
505 #define NVREG_IRQMASK_CPU 0x0060
506 #define NVREG_IRQ_TX_ALL (NVREG_IRQ_TX_ERR|NVREG_IRQ_TX_OK|NVREG_IRQ_TX_FORCED)
507 #define NVREG_IRQ_RX_ALL (NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_RX_FORCED)
508 #define NVREG_IRQ_OTHER (NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RECOVER_ERROR)
510 #define NVREG_IRQ_UNKNOWN (~(NVREG_IRQ_RX_ERROR|NVREG_IRQ_RX|NVREG_IRQ_RX_NOBUF|NVREG_IRQ_TX_ERR| \
511 NVREG_IRQ_TX_OK|NVREG_IRQ_TIMER|NVREG_IRQ_LINK|NVREG_IRQ_RX_FORCED| \
512 NVREG_IRQ_TX_FORCED|NVREG_IRQ_RECOVER_ERROR))
514 NvRegUnknownSetupReg6 = 0x008,
515 #define NVREG_UNKSETUP6_VAL 3
517 /*
518 * NVREG_POLL_DEFAULT is the interval length of the timer source on the nic
519 * NVREG_POLL_DEFAULT=97 would result in an interval length of 1 ms
520 */
521 NvRegPollingInterval = 0x00c,
522 #define NVREG_POLL_DEFAULT_THROUGHPUT 970
523 #define NVREG_POLL_DEFAULT_CPU 13
524 NvRegMSIMap0 = 0x020,
525 NvRegMSIMap1 = 0x024,
526 NvRegMSIIrqMask = 0x030,
527 #define NVREG_MSI_VECTOR_0_ENABLED 0x01
528 NvRegMisc1 = 0x080,
529 #define NVREG_MISC1_PAUSE_TX 0x01
530 #define NVREG_MISC1_HD 0x02
531 #define NVREG_MISC1_FORCE 0x3b0f3c
533 NvRegMacReset = 0x34,
534 #define NVREG_MAC_RESET_ASSERT 0x0F3
535 NvRegTransmitterControl = 0x084,
536 #define NVREG_XMITCTL_START 0x01
537 #define NVREG_XMITCTL_MGMT_ST 0x40000000
538 #define NVREG_XMITCTL_SYNC_MASK 0x000f0000
539 #define NVREG_XMITCTL_SYNC_NOT_READY 0x0
540 #define NVREG_XMITCTL_SYNC_PHY_INIT 0x00040000
541 #define NVREG_XMITCTL_MGMT_SEMA_MASK 0x00000f00
542 #define NVREG_XMITCTL_MGMT_SEMA_FREE 0x0
543 #define NVREG_XMITCTL_HOST_SEMA_MASK 0x0000f000
544 #define NVREG_XMITCTL_HOST_SEMA_ACQ 0x0000f000
545 #define NVREG_XMITCTL_HOST_LOADED 0x00004000
546 #define NVREG_XMITCTL_TX_PATH_EN 0x01000000
547 NvRegTransmitterStatus = 0x088,
548 #define NVREG_XMITSTAT_BUSY 0x01
550 NvRegPacketFilterFlags = 0x8c,
551 #define NVREG_PFF_PAUSE_RX 0x08
552 #define NVREG_PFF_ALWAYS 0x7F0000
553 #define NVREG_PFF_PROMISC 0x80
554 #define NVREG_PFF_MYADDR 0x20
555 #define NVREG_PFF_LOOPBACK 0x10
557 NvRegOffloadConfig = 0x90,
558 #define NVREG_OFFLOAD_HOMEPHY 0x601
559 #define NVREG_OFFLOAD_NORMAL RX_NIC_BUFSIZE
560 NvRegReceiverControl = 0x094,
561 #define NVREG_RCVCTL_START 0x01
562 #define NVREG_RCVCTL_RX_PATH_EN 0x01000000
563 NvRegReceiverStatus = 0x98,
564 #define NVREG_RCVSTAT_BUSY 0x01
566 NvRegRandomSeed = 0x9c,
567 #define NVREG_RNDSEED_MASK 0x00ff
568 #define NVREG_RNDSEED_FORCE 0x7f00
569 #define NVREG_RNDSEED_FORCE2 0x2d00
570 #define NVREG_RNDSEED_FORCE3 0x7400
572 NvRegTxDeferral = 0xA0,
573 #define NVREG_TX_DEFERRAL_DEFAULT 0x15050f
574 #define NVREG_TX_DEFERRAL_RGMII_10_100 0x16070f
575 #define NVREG_TX_DEFERRAL_RGMII_1000 0x14050f
576 NvRegRxDeferral = 0xA4,
577 #define NVREG_RX_DEFERRAL_DEFAULT 0x16
578 NvRegMacAddrA = 0xA8,
579 NvRegMacAddrB = 0xAC,
580 NvRegMulticastAddrA = 0xB0,
581 #define NVREG_MCASTADDRA_FORCE 0x01
582 NvRegMulticastAddrB = 0xB4,
583 NvRegMulticastMaskA = 0xB8,
584 #define NVREG_MCASTMASKA_NONE 0xffffffff
585 NvRegMulticastMaskB = 0xBC,
586 #define NVREG_MCASTMASKB_NONE 0xffff
588 NvRegPhyInterface = 0xC0,
589 #define PHY_RGMII 0x10000000
591 NvRegTxRingPhysAddr = 0x100,
592 NvRegRxRingPhysAddr = 0x104,
593 NvRegRingSizes = 0x108,
594 #define NVREG_RINGSZ_TXSHIFT 0
595 #define NVREG_RINGSZ_RXSHIFT 16
596 NvRegTransmitPoll = 0x10c,
597 #define NVREG_TRANSMITPOLL_MAC_ADDR_REV 0x00008000
598 NvRegLinkSpeed = 0x110,
599 #define NVREG_LINKSPEED_FORCE 0x10000
600 #define NVREG_LINKSPEED_10 1000
601 #define NVREG_LINKSPEED_100 100
602 #define NVREG_LINKSPEED_1000 50
603 #define NVREG_LINKSPEED_MASK (0xFFF)
604 NvRegUnknownSetupReg5 = 0x130,
605 #define NVREG_UNKSETUP5_BIT31 (1<<31)
606 NvRegTxWatermark = 0x13c,
607 #define NVREG_TX_WM_DESC1_DEFAULT 0x0200010
608 #define NVREG_TX_WM_DESC2_3_DEFAULT 0x1e08000
609 #define NVREG_TX_WM_DESC2_3_1000 0xfe08000
610 NvRegTxRxControl = 0x144,
611 #define NVREG_TXRXCTL_KICK 0x0001
612 #define NVREG_TXRXCTL_BIT1 0x0002
613 #define NVREG_TXRXCTL_BIT2 0x0004
614 #define NVREG_TXRXCTL_IDLE 0x0008
615 #define NVREG_TXRXCTL_RESET 0x0010
616 #define NVREG_TXRXCTL_RXCHECK 0x0400
617 #define NVREG_TXRXCTL_DESC_1 0
618 #define NVREG_TXRXCTL_DESC_2 0x002100
619 #define NVREG_TXRXCTL_DESC_3 0xc02200
620 #define NVREG_TXRXCTL_VLANSTRIP 0x00040
621 #define NVREG_TXRXCTL_VLANINS 0x00080
622 NvRegTxRingPhysAddrHigh = 0x148,
623 NvRegRxRingPhysAddrHigh = 0x14C,
624 NvRegTxPauseFrame = 0x170,
625 #define NVREG_TX_PAUSEFRAME_DISABLE 0x01ff0080
626 #define NVREG_TX_PAUSEFRAME_ENABLE_V1 0x01800010
627 #define NVREG_TX_PAUSEFRAME_ENABLE_V2 0x056003f0
628 #define NVREG_TX_PAUSEFRAME_ENABLE_V3 0x09f00880
629 NvRegMIIStatus = 0x180,
630 #define NVREG_MIISTAT_ERROR 0x0001
631 #define NVREG_MIISTAT_LINKCHANGE 0x0008
632 #define NVREG_MIISTAT_MASK_RW 0x0007
633 #define NVREG_MIISTAT_MASK_ALL 0x000f
634 NvRegMIIMask = 0x184,
635 #define NVREG_MII_LINKCHANGE 0x0008
637 NvRegAdapterControl = 0x188,
638 #define NVREG_ADAPTCTL_START 0x02
639 #define NVREG_ADAPTCTL_LINKUP 0x04
640 #define NVREG_ADAPTCTL_PHYVALID 0x40000
641 #define NVREG_ADAPTCTL_RUNNING 0x100000
642 #define NVREG_ADAPTCTL_PHYSHIFT 24
643 NvRegMIISpeed = 0x18c,
644 #define NVREG_MIISPEED_BIT8 (1<<8)
645 #define NVREG_MIIDELAY 5
646 NvRegMIIControl = 0x190,
647 #define NVREG_MIICTL_INUSE 0x08000
648 #define NVREG_MIICTL_WRITE 0x00400
649 #define NVREG_MIICTL_ADDRSHIFT 5
650 NvRegMIIData = 0x194,
651 NvRegWakeUpFlags = 0x200,
652 #define NVREG_WAKEUPFLAGS_VAL 0x7770
653 #define NVREG_WAKEUPFLAGS_BUSYSHIFT 24
654 #define NVREG_WAKEUPFLAGS_ENABLESHIFT 16
655 #define NVREG_WAKEUPFLAGS_D3SHIFT 12
656 #define NVREG_WAKEUPFLAGS_D2SHIFT 8
657 #define NVREG_WAKEUPFLAGS_D1SHIFT 4
658 #define NVREG_WAKEUPFLAGS_D0SHIFT 0
659 #define NVREG_WAKEUPFLAGS_ACCEPT_MAGPAT 0x01
660 #define NVREG_WAKEUPFLAGS_ACCEPT_WAKEUPPAT 0x02
661 #define NVREG_WAKEUPFLAGS_ACCEPT_LINKCHANGE 0x04
662 #define NVREG_WAKEUPFLAGS_ENABLE 0x1111
664 NvRegPatternCRC = 0x204,
665 #define NV_UNKNOWN_VAL 0x01
666 NvRegPatternMask = 0x208,
667 NvRegPowerCap = 0x268,
668 #define NVREG_POWERCAP_D3SUPP (1<<30)
669 #define NVREG_POWERCAP_D2SUPP (1<<26)
670 #define NVREG_POWERCAP_D1SUPP (1<<25)
671 NvRegPowerState = 0x26c,
672 #define NVREG_POWERSTATE_POWEREDUP 0x8000
673 #define NVREG_POWERSTATE_VALID 0x0100
674 #define NVREG_POWERSTATE_MASK 0x0003
675 #define NVREG_POWERSTATE_D0 0x0000
676 #define NVREG_POWERSTATE_D1 0x0001
677 #define NVREG_POWERSTATE_D2 0x0002
678 #define NVREG_POWERSTATE_D3 0x0003
679 NvRegTxCnt = 0x280,
680 NvRegTxZeroReXmt = 0x284,
681 NvRegTxOneReXmt = 0x288,
682 NvRegTxManyReXmt = 0x28c,
683 NvRegTxLateCol = 0x290,
684 NvRegTxUnderflow = 0x294,
685 NvRegTxLossCarrier = 0x298,
686 NvRegTxExcessDef = 0x29c,
687 NvRegTxRetryErr = 0x2a0,
688 NvRegRxFrameErr = 0x2a4,
689 NvRegRxExtraByte = 0x2a8,
690 NvRegRxLateCol = 0x2ac,
691 NvRegRxRunt = 0x2b0,
692 NvRegRxFrameTooLong = 0x2b4,
693 NvRegRxOverflow = 0x2b8,
694 NvRegRxFCSErr = 0x2bc,
695 NvRegRxFrameAlignErr = 0x2c0,
696 NvRegRxLenErr = 0x2c4,
697 NvRegRxUnicast = 0x2c8,
698 NvRegRxMulticast = 0x2cc,
699 NvRegRxBroadcast = 0x2d0,
700 NvRegTxDef = 0x2d4,
701 NvRegTxFrame = 0x2d8,
702 NvRegRxCnt = 0x2dc,
703 NvRegTxPause = 0x2e0,
704 NvRegRxPause = 0x2e4,
705 NvRegRxDropFrame = 0x2e8,
707 NvRegVlanControl = 0x300,
708 #define NVREG_VLANCONTROL_ENABLE 0x2000
709 NvRegMSIXMap0 = 0x3e0,
710 NvRegMSIXMap1 = 0x3e4,
711 NvRegMSIXIrqStatus = 0x3f0,
713 NvRegPowerState2 = 0x600,
714 #define NVREG_POWERSTATE2_POWERUP_MASK 0x0F11
715 #define NVREG_POWERSTATE2_POWERUP_REV_A3 0x0001
716 };
718 /* Big endian: should work, but is untested */
719 struct ring_desc {
720 u32 PacketBuffer;
721 u32 FlagLen;
722 };
724 struct ring_desc_ex {
725 u32 PacketBufferHigh;
726 u32 PacketBufferLow;
727 u32 TxVlan;
728 u32 FlagLen;
729 };
731 typedef union _ring_type {
732 struct ring_desc* orig;
733 struct ring_desc_ex* ex;
734 } ring_type;
736 #define FLAG_MASK_V1 0xffff0000
737 #define FLAG_MASK_V2 0xffffc000
738 #define LEN_MASK_V1 (0xffffffff ^ FLAG_MASK_V1)
739 #define LEN_MASK_V2 (0xffffffff ^ FLAG_MASK_V2)
741 #define NV_TX_LASTPACKET (1<<16)
742 #define NV_TX_RETRYERROR (1<<19)
743 #define NV_TX_FORCED_INTERRUPT (1<<24)
744 #define NV_TX_DEFERRED (1<<26)
745 #define NV_TX_CARRIERLOST (1<<27)
746 #define NV_TX_LATECOLLISION (1<<28)
747 #define NV_TX_UNDERFLOW (1<<29)
748 #define NV_TX_ERROR (1<<30) /* logical OR of all errors */
749 #define NV_TX_VALID (1<<31)
751 #define NV_TX2_LASTPACKET (1<<29)
752 #define NV_TX2_RETRYERROR (1<<18)
753 #define NV_TX2_FORCED_INTERRUPT (1<<30)
754 #define NV_TX2_DEFERRED (1<<25)
755 #define NV_TX2_CARRIERLOST (1<<26)
756 #define NV_TX2_LATECOLLISION (1<<27)
757 #define NV_TX2_UNDERFLOW (1<<28)
758 /* error and valid are the same for both */
759 #define NV_TX2_ERROR (1<<30) /* logical OR of all errors */
760 #define NV_TX2_VALID (1<<31)
761 #define NV_TX2_TSO (1<<28)
762 #define NV_TX2_TSO_SHIFT 14
763 #define NV_TX2_TSO_MAX_SHIFT 14
764 #define NV_TX2_TSO_MAX_SIZE (1<<NV_TX2_TSO_MAX_SHIFT)
765 #define NV_TX2_CHECKSUM_L3 (1<<27)
766 #define NV_TX2_CHECKSUM_L4 (1<<26)
768 #define NV_TX3_VLAN_TAG_PRESENT (1<<18)
770 #define NV_RX_DESCRIPTORVALID (1<<16)
771 #define NV_RX_MISSEDFRAME (1<<17)
772 #define NV_RX_SUBSTRACT1 (1<<18)
773 #define NV_RX_ERROR1 (1<<23)
774 #define NV_RX_ERROR2 (1<<24)
775 #define NV_RX_ERROR3 (1<<25)
776 #define NV_RX_ERROR4 (1<<26)
777 #define NV_RX_CRCERR (1<<27)
778 #define NV_RX_OVERFLOW (1<<28)
779 #define NV_RX_FRAMINGERR (1<<29)
780 #define NV_RX_ERROR (1<<30) /* logical OR of all errors */
781 #define NV_RX_AVAIL (1<<31)
783 #define NV_RX2_CHECKSUMMASK (0x1C000000)
784 #define NV_RX2_CHECKSUM_IP (0x10000000)
785 #define NV_RX2_CHECKSUM_IP_TCP (0x14000000)
786 #define NV_RX2_CHECKSUM_IP_UDP (0x18000000)
787 #define NV_RX2_DESCRIPTORVALID (1<<29)
788 #define NV_RX2_SUBSTRACT1 (1<<25)
789 #define NV_RX2_ERROR1 (1<<18)
790 #define NV_RX2_ERROR2 (1<<19)
791 #define NV_RX2_ERROR3 (1<<20)
792 #define NV_RX2_ERROR4 (1<<21)
793 #define NV_RX2_CRCERR (1<<22)
794 #define NV_RX2_OVERFLOW (1<<23)
795 #define NV_RX2_FRAMINGERR (1<<24)
796 /* error and avail are the same for both */
797 #define NV_RX2_ERROR (1<<30) /* logical OR of all errors */
798 #define NV_RX2_AVAIL (1<<31)
800 #define NV_RX3_VLAN_TAG_PRESENT (1<<16)
801 #define NV_RX3_VLAN_TAG_MASK (0x0000FFFF)
803 /* Miscellaneous hardware-related defines: */
804 #define NV_PCI_REGSZ_VER1 0x270
805 #define NV_PCI_REGSZ_VER2 0x2d4
806 #define NV_PCI_REGSZ_VER3 0x604
808 /* various timeout delays: all in usec */
809 #define NV_TXRX_RESET_DELAY 4
810 #define NV_TXSTOP_DELAY1 10
811 #define NV_TXSTOP_DELAY1MAX 500000
812 #define NV_TXSTOP_DELAY2 100
813 #define NV_RXSTOP_DELAY1 10
814 #define NV_RXSTOP_DELAY1MAX 500000
815 #define NV_RXSTOP_DELAY2 100
816 #define NV_SETUP5_DELAY 5
817 #define NV_SETUP5_DELAYMAX 50000
818 #define NV_POWERUP_DELAY 5
819 #define NV_POWERUP_DELAYMAX 5000
820 #define NV_MIIBUSY_DELAY 50
821 #define NV_MIIPHY_DELAY 10
822 #define NV_MIIPHY_DELAYMAX 10000
823 #define NV_MAC_RESET_DELAY 64
825 #define NV_WAKEUPPATTERNS 5
826 #define NV_WAKEUPMASKENTRIES 4
828 /* General driver defaults */
829 #define NV_WATCHDOG_TIMEO (5*HZ)
831 #define RX_RING_DEFAULT 128
832 #define TX_RING_DEFAULT 64
833 #define RX_RING_MIN RX_RING_DEFAULT
834 #define TX_RING_MIN TX_RING_DEFAULT
835 #define RING_MAX_DESC_VER_1 1024
836 #define RING_MAX_DESC_VER_2_3 16384
837 /*
838 * Difference between the get and put pointers for the tx ring.
839 * This is used to throttle the amount of data outstanding in the
840 * tx ring.
841 */
842 #define TX_LIMIT_DIFFERENCE 1
844 /* rx/tx mac addr + type + vlan + align + slack*/
845 #define NV_RX_HEADERS (64)
846 /* even more slack. */
847 #define NV_RX_ALLOC_PAD (64)
849 /* maximum mtu size */
850 #define NV_PKTLIMIT_1 ETH_DATA_LEN /* hard limit not known */
851 #define NV_PKTLIMIT_2 9100 /* Actual limit according to NVidia: 9202 */
853 #define OOM_REFILL (1+HZ/20)
854 #define POLL_WAIT (1+HZ/100)
855 #define LINK_TIMEOUT (3*HZ)
856 #define STATS_INTERVAL (10*HZ)
858 /*
859 * desc_ver values:
860 * The nic supports three different descriptor types:
861 * - DESC_VER_1: Original
862 * - DESC_VER_2: support for jumbo frames.
863 * - DESC_VER_3: 64-bit format.
864 */
865 #define DESC_VER_1 1
866 #define DESC_VER_2 2
867 #define DESC_VER_3 3
869 /* PHY defines */
870 #define PHY_OUI_MARVELL 0x5043
871 #define PHY_OUI_CICADA 0x03f1
872 #define PHY_OUI_VITESSE 0x01c1
873 #define PHY_OUI_REALTEK 0x0732
874 #define PHYID1_OUI_MASK 0x03ff
875 #define PHYID1_OUI_SHFT 6
876 #define PHYID2_OUI_MASK 0xfc00
877 #define PHYID2_OUI_SHFT 10
878 #define PHYID2_MODEL_MASK 0x03f0
879 #define PHY_MODEL_MARVELL_E3016 0x220
880 #define PHY_MODEL_MARVELL_E1011 0xb0
881 #define PHY_MARVELL_E3016_INITMASK 0x0300
882 #define PHY_CICADA_INIT1 0x0f000
883 #define PHY_CICADA_INIT2 0x0e00
884 #define PHY_CICADA_INIT3 0x01000
885 #define PHY_CICADA_INIT4 0x0200
886 #define PHY_CICADA_INIT5 0x0004
887 #define PHY_CICADA_INIT6 0x02000
888 #define PHY_VITESSE_INIT_REG1 0x1f
889 #define PHY_VITESSE_INIT_REG2 0x10
890 #define PHY_VITESSE_INIT_REG3 0x11
891 #define PHY_VITESSE_INIT_REG4 0x12
892 #define PHY_VITESSE_INIT_MSK1 0xc
893 #define PHY_VITESSE_INIT_MSK2 0x0180
894 #define PHY_VITESSE_INIT1 0x52b5
895 #define PHY_VITESSE_INIT2 0xaf8a
896 #define PHY_VITESSE_INIT3 0x8
897 #define PHY_VITESSE_INIT4 0x8f8a
898 #define PHY_VITESSE_INIT5 0xaf86
899 #define PHY_VITESSE_INIT6 0x8f86
900 #define PHY_VITESSE_INIT7 0xaf82
901 #define PHY_VITESSE_INIT8 0x0100
902 #define PHY_VITESSE_INIT9 0x8f82
903 #define PHY_VITESSE_INIT10 0x0
904 #define PHY_REALTEK_INIT_REG1 0x1f
905 #define PHY_REALTEK_INIT_REG2 0x19
906 #define PHY_REALTEK_INIT_REG3 0x13
907 #define PHY_REALTEK_INIT1 0x0000
908 #define PHY_REALTEK_INIT2 0x8e00
909 #define PHY_REALTEK_INIT3 0x0001
910 #define PHY_REALTEK_INIT4 0xad17
912 #define PHY_GIGABIT 0x0100
914 #define PHY_TIMEOUT 0x1
915 #define PHY_ERROR 0x2
917 #define PHY_100 0x1
918 #define PHY_1000 0x2
919 #define PHY_HALF 0x100
921 #define NV_PAUSEFRAME_RX_CAPABLE 0x0001
922 #define NV_PAUSEFRAME_TX_CAPABLE 0x0002
923 #define NV_PAUSEFRAME_RX_ENABLE 0x0004
924 #define NV_PAUSEFRAME_TX_ENABLE 0x0008
925 #define NV_PAUSEFRAME_RX_REQ 0x0010
926 #define NV_PAUSEFRAME_TX_REQ 0x0020
927 #define NV_PAUSEFRAME_AUTONEG 0x0040
929 /* MSI/MSI-X defines */
930 #define NV_MSI_X_MAX_VECTORS 8
931 #define NV_MSI_X_VECTORS_MASK 0x000f
932 #define NV_MSI_CAPABLE 0x0010
933 #define NV_MSI_X_CAPABLE 0x0020
934 #define NV_MSI_ENABLED 0x0040
935 #define NV_MSI_X_ENABLED 0x0080
937 #define NV_MSI_X_VECTOR_ALL 0x0
938 #define NV_MSI_X_VECTOR_RX 0x0
939 #define NV_MSI_X_VECTOR_TX 0x1
940 #define NV_MSI_X_VECTOR_OTHER 0x2
942 #define NV_RESTART_TX 0x1
943 #define NV_RESTART_RX 0x2
944 #define NVLAN_DISABLE_ALL_FEATURES do { \
945 msi = NV_MSI_INT_DISABLED; \
946 msix = NV_MSIX_INT_DISABLED; \
947 scatter_gather = NV_SCATTER_GATHER_DISABLED; \
948 tso_offload = NV_TSO_DISABLED; \
949 tx_checksum_offload = NV_TX_CHECKSUM_DISABLED; \
950 rx_checksum_offload = NV_RX_CHECKSUM_DISABLED; \
951 tx_flow_control = NV_TX_FLOW_CONTROL_DISABLED; \
952 rx_flow_control = NV_RX_FLOW_CONTROL_DISABLED; \
953 wol = NV_WOL_DISABLED; \
954 tagging_8021pq = NV_8021PQ_DISABLED; \
955 } while (0)
957 struct nv_ethtool_str {
958 char name[ETH_GSTRING_LEN];
959 };
961 static const struct nv_ethtool_str nv_estats_str[] = {
962 { "tx_dropped" },
963 { "tx_fifo_errors" },
964 { "tx_carrier_errors" },
965 { "tx_packets" },
966 { "tx_bytes" },
967 { "rx_crc_errors" },
968 { "rx_over_errors" },
969 { "rx_errors_total" },
970 { "rx_packets" },
971 { "rx_bytes" },
973 /* hardware counters */
974 { "tx_zero_rexmt" },
975 { "tx_one_rexmt" },
976 { "tx_many_rexmt" },
977 { "tx_late_collision" },
978 { "tx_excess_deferral" },
979 { "tx_retry_error" },
980 { "rx_frame_error" },
981 { "rx_extra_byte" },
982 { "rx_late_collision" },
983 { "rx_runt" },
984 { "rx_frame_too_long" },
985 { "rx_frame_align_error" },
986 { "rx_length_error" },
987 { "rx_unicast" },
988 { "rx_multicast" },
989 { "rx_broadcast" },
990 { "tx_deferral" },
991 { "tx_pause" },
992 { "rx_pause" },
993 { "rx_drop_frame" }
994 };
996 struct nv_ethtool_stats {
997 u64 tx_dropped;
998 u64 tx_fifo_errors;
999 u64 tx_carrier_errors;
1000 u64 tx_packets;
1001 u64 tx_bytes;
1002 u64 rx_crc_errors;
1003 u64 rx_over_errors;
1004 u64 rx_errors_total;
1005 u64 rx_packets;
1006 u64 rx_bytes;
1008 /* hardware counters */
1009 u64 tx_zero_rexmt;
1010 u64 tx_one_rexmt;
1011 u64 tx_many_rexmt;
1012 u64 tx_late_collision;
1013 u64 tx_excess_deferral;
1014 u64 tx_retry_error;
1015 u64 rx_frame_error;
1016 u64 rx_extra_byte;
1017 u64 rx_late_collision;
1018 u64 rx_runt;
1019 u64 rx_frame_too_long;
1020 u64 rx_frame_align_error;
1021 u64 rx_length_error;
1022 u64 rx_unicast;
1023 u64 rx_multicast;
1024 u64 rx_broadcast;
1025 u64 tx_deferral;
1026 u64 tx_pause;
1027 u64 rx_pause;
1028 u64 rx_drop_frame;
1029 };
1030 #define NV_DEV_STATISTICS_V2_COUNT (sizeof(struct nv_ethtool_stats)/sizeof(u64))
1031 #define NV_DEV_STATISTICS_V1_COUNT (NV_DEV_STATISTICS_V2_COUNT - 4)
1032 #define NV_DEV_STATISTICS_SW_COUNT 10
1034 /* diagnostics */
1035 #define NV_TEST_COUNT_BASE 3
1036 #define NV_TEST_COUNT_EXTENDED 4
1038 static const struct nv_ethtool_str nv_etests_str[] = {
1039 { "link (online/offline)" },
1040 { "register (offline) " },
1041 { "interrupt (offline) " },
1042 { "loopback (offline) " }
1043 };
1045 struct register_test {
1046 u32 reg;
1047 u32 mask;
1048 };
1050 static const struct register_test nv_registers_test[] = {
1051 { NvRegUnknownSetupReg6, 0x01 },
1052 { NvRegMisc1, 0x03c },
1053 { NvRegOffloadConfig, 0x03ff },
1054 { NvRegMulticastAddrA, 0xffffffff },
1055 { NvRegTxWatermark, 0x0ff },
1056 { NvRegWakeUpFlags, 0x07777 },
1057 { 0,0 }
1058 };
1060 struct nv_skb_map {
1061 struct sk_buff *skb;
1062 dma_addr_t dma;
1063 unsigned int dma_len;
1064 };
1066 /*
1067 * SMP locking:
1068 * All hardware access under dev->priv->lock, except the performance
1069 * critical parts:
1070 * - rx is (pseudo-) lockless: it relies on the single-threading provided
1071 * by the arch code for interrupts.
1072 * - tx setup is lockless: it relies on dev->xmit_lock. Actual submission
1073 * needs dev->priv->lock :-(
1074 * - set_multicast_list: preparation lockless, relies on dev->xmit_lock.
1075 */
1077 /* in dev: base, irq */
1078 struct fe_priv {
1080 /* fields used in fast path are grouped together
1081 for better cache performance
1082 */
1083 spinlock_t lock;
1084 spinlock_t timer_lock;
1085 void __iomem *base;
1086 struct pci_dev *pci_dev;
1087 u32 txrxctl_bits;
1088 int stop_tx;
1089 int need_linktimer;
1090 unsigned long link_timeout;
1091 u32 irqmask;
1092 u32 msi_flags;
1094 unsigned int rx_buf_sz;
1095 struct vlan_group *vlangrp;
1096 int tx_ring_size;
1097 int rx_csum;
1099 /*
1100 * rx specific fields in fast path
1101 */
1102 ring_type get_rx __attribute__((aligned(L1_CACHE_BYTES)));
1103 ring_type put_rx, first_rx, last_rx;
1104 struct nv_skb_map *get_rx_ctx, *put_rx_ctx;
1105 struct nv_skb_map *first_rx_ctx, *last_rx_ctx;
1107 /*
1108 * tx specific fields in fast path
1109 */
1110 ring_type get_tx __attribute__((aligned(L1_CACHE_BYTES)));
1111 ring_type put_tx, first_tx, last_tx;
1112 struct nv_skb_map *get_tx_ctx, *put_tx_ctx;
1113 struct nv_skb_map *first_tx_ctx, *last_tx_ctx;
1115 struct nv_skb_map *rx_skb;
1116 struct nv_skb_map *tx_skb;
1118 /* General data:
1119 * Locking: spin_lock(&np->lock); */
1120 struct net_device_stats stats;
1121 struct nv_ethtool_stats estats;
1122 int in_shutdown;
1123 u32 linkspeed;
1124 int duplex;
1125 int speed_duplex;
1126 int autoneg;
1127 int fixed_mode;
1128 int phyaddr;
1129 int wolenabled;
1130 unsigned int phy_oui;
1131 unsigned int phy_model;
1132 u16 gigabit;
1133 int intr_test;
1134 int recover_error;
1136 /* General data: RO fields */
1137 dma_addr_t ring_addr;
1138 u32 orig_mac[2];
1139 u32 desc_ver;
1140 u32 vlanctl_bits;
1141 u32 driver_data;
1142 u32 register_size;
1143 u32 mac_in_use;
1145 /* rx specific fields.
1146 * Locking: Within irq handler or disable_irq+spin_lock(&np->lock);
1147 */
1148 ring_type rx_ring;
1149 unsigned int pkt_limit;
1150 struct timer_list oom_kick;
1151 struct timer_list nic_poll;
1152 struct timer_list stats_poll;
1153 u32 nic_poll_irq;
1154 int rx_ring_size;
1155 u32 rx_len_errors;
1156 /*
1157 * tx specific fields.
1158 */
1159 ring_type tx_ring;
1160 u32 tx_flags;
1161 int tx_limit_start;
1162 int tx_limit_stop;
1165 /* msi/msi-x fields */
1166 struct msix_entry msi_x_entry[NV_MSI_X_MAX_VECTORS];
1168 /* flow control */
1169 u32 pause_flags;
1170 u32 led_stats[3];
1171 u32 saved_config_space[64];
1172 u32 saved_nvregphyinterface;
1173 #if NVVER < SUSE10
1174 u32 pci_state[16];
1175 #endif
1176 /* msix table */
1177 struct nvmsi_msg nvmsg[NV_MSI_X_MAX_VECTORS];
1178 unsigned long msix_pa_addr;
1179 };
1181 /*
1182 * Maximum number of loops until we assume that a bit in the irq mask
1183 * is stuck. Overridable with module param.
1184 */
1185 static int max_interrupt_work = 5;
1187 /*
1188 * Optimization can be either throughput mode or CPU mode
1190 * Throughput Mode: Every tx and rx packet will generate an interrupt.
1191 * CPU Mode: Interrupts are controlled by a timer.
1192 */
1193 enum {
1194 NV_OPTIMIZATION_MODE_THROUGHPUT,
1195 NV_OPTIMIZATION_MODE_CPU
1196 };
1197 static int optimization_mode = NV_OPTIMIZATION_MODE_THROUGHPUT;
1199 /*
1200 * Poll interval for timer irq
1202 * This interval determines how frequently an interrupt is generated.
1203 * This value is determined by [(time_in_micro_secs * 100) / (2^10)]
1204 * Min = 0, and Max = 65535
1205 */
1206 static int poll_interval = -1;
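/*
 * Worked example (assuming the formula in the comment above): for a 1 ms
 * interrupt interval, poll_interval = (1000 * 100) / 2^10, roughly 97,
 * which matches the NVREG_POLL_DEFAULT note earlier in this file
 * (a value of 97 gives an interval of about 1 ms).
 */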
1208 /*
1209 * MSI interrupts
1210 */
1211 enum {
1212 NV_MSI_INT_DISABLED,
1213 NV_MSI_INT_ENABLED
1214 };
1216 #ifdef CONFIG_PCI_MSI
1217 static int msi = NV_MSI_INT_ENABLED;
1218 #else
1219 static int msi = NV_MSI_INT_DISABLED;
1220 #endif
1222 /*
1223 * MSIX interrupts
1224 */
1225 enum {
1226 NV_MSIX_INT_DISABLED,
1227 NV_MSIX_INT_ENABLED
1228 };
1230 #ifdef CONFIG_PCI_MSI
1231 static int msix = NV_MSIX_INT_ENABLED;
1232 #else
1233 static int msix = NV_MSIX_INT_DISABLED;
1234 #endif
1235 /*
1236 * PHY Speed and Duplex
1237 */
1238 enum {
1239 NV_SPEED_DUPLEX_AUTO,
1240 NV_SPEED_DUPLEX_10_HALF_DUPLEX,
1241 NV_SPEED_DUPLEX_10_FULL_DUPLEX,
1242 NV_SPEED_DUPLEX_100_HALF_DUPLEX,
1243 NV_SPEED_DUPLEX_100_FULL_DUPLEX,
1244 NV_SPEED_DUPLEX_1000_FULL_DUPLEX
1245 };
1246 static int speed_duplex = NV_SPEED_DUPLEX_AUTO;
1248 /*
1249 * PHY autonegotiation
1250 */
1251 static int autoneg = AUTONEG_ENABLE;
1253 /*
1254 * Scatter gather
1255 */
1256 enum {
1257 NV_SCATTER_GATHER_DISABLED,
1258 NV_SCATTER_GATHER_ENABLED
1259 };
1260 static int scatter_gather = NV_SCATTER_GATHER_ENABLED;
1262 /*
1263 * TCP Segmentation Offload (TSO)
1264 */
1265 enum {
1266 NV_TSO_DISABLED,
1267 NV_TSO_ENABLED
1268 };
1269 static int tso_offload = NV_TSO_ENABLED;
1271 /*
1272 * MTU settings
1273 */
1274 static int mtu = ETH_DATA_LEN;
1276 /*
1277 * Tx checksum offload
1278 */
1279 enum {
1280 NV_TX_CHECKSUM_DISABLED,
1281 NV_TX_CHECKSUM_ENABLED
1282 };
1283 static int tx_checksum_offload = NV_TX_CHECKSUM_ENABLED;
1285 /*
1286 * Rx checksum offload
1287 */
1288 enum {
1289 NV_RX_CHECKSUM_DISABLED,
1290 NV_RX_CHECKSUM_ENABLED
1291 };
1292 static int rx_checksum_offload = NV_RX_CHECKSUM_ENABLED;
1294 /*
1295 * Tx ring size
1296 */
1297 static int tx_ring_size = TX_RING_DEFAULT;
1299 /*
1300 * Rx ring size
1301 */
1302 static int rx_ring_size = RX_RING_DEFAULT;
1304 /*
1305 * Tx flow control
1306 */
1307 enum {
1308 NV_TX_FLOW_CONTROL_DISABLED,
1309 NV_TX_FLOW_CONTROL_ENABLED
1310 };
1311 static int tx_flow_control = NV_TX_FLOW_CONTROL_ENABLED;
1313 /*
1314 * Rx flow control
1315 */
1316 enum {
1317 NV_RX_FLOW_CONTROL_DISABLED,
1318 NV_RX_FLOW_CONTROL_ENABLED
1319 };
1320 static int rx_flow_control = NV_RX_FLOW_CONTROL_ENABLED;
1322 /*
1323 * DMA 64bit
1324 */
1325 enum {
1326 NV_DMA_64BIT_DISABLED,
1327 NV_DMA_64BIT_ENABLED
1328 };
1329 static int dma_64bit = NV_DMA_64BIT_ENABLED;
1331 /*
1332 * Wake On Lan
1333 */
1334 enum {
1335 NV_WOL_DISABLED,
1336 NV_WOL_ENABLED
1337 };
1338 static int wol = NV_WOL_DISABLED;
1340 /*
1341 * Tagging 802.1pq
1342 */
1343 enum {
1344 NV_8021PQ_DISABLED,
1345 NV_8021PQ_ENABLED
1346 };
1347 static int tagging_8021pq = NV_8021PQ_ENABLED;
1349 enum {
1350 NV_LOW_POWER_DISABLED,
1351 NV_LOW_POWER_ENABLED
1352 };
1353 static int lowpowerspeed = NV_LOW_POWER_ENABLED;
1355 static int debug = 0;
1357 #if NVVER < RHES4
1358 static inline unsigned long nv_msecs_to_jiffies(const unsigned int m)
1360 #if HZ <= 1000 && !(1000 % HZ)
1361 return (m + (1000 / HZ) - 1) / (1000 / HZ);
1362 #elif HZ > 1000 && !(HZ % 1000)
1363 return m * (HZ / 1000);
1364 #else
1365 return (m * HZ + 999) / 1000;
1366 #endif
1368 #endif
1370 static void nv_msleep(unsigned int msecs)
1372 #if NVVER > SLES9
1373 msleep(msecs);
1374 #else
1375 unsigned long timeout = nv_msecs_to_jiffies(msecs);
1377 while (timeout) {
1378 set_current_state(TASK_UNINTERRUPTIBLE);
1379 timeout = schedule_timeout(timeout);
1381 #endif
1384 static inline struct fe_priv *get_nvpriv(struct net_device *dev)
1386 #if NVVER > RHES3
1387 return netdev_priv(dev);
1388 #else
1389 return (struct fe_priv *) dev->priv;
1390 #endif
1393 static void __init quirk_nforce_network_class(struct pci_dev *pdev)
1395 /* Some implementations of the nVidia network controllers
1396 * show up as bridges when we need to see them as network
1397 * devices.
1398 */
1400 /* If this is already known as a network ctlr, do nothing. */
1401 if ((pdev->class >> 8) == PCI_CLASS_NETWORK_ETHERNET)
1402 return;
1404 if ((pdev->class >> 8) == PCI_CLASS_BRIDGE_OTHER) {
1405 char c;
1407 /* Clearing bit 6 of the register at 0xf8
1408 * selects Ethernet device class
1409 */
1410 pci_read_config_byte(pdev, 0xf8, &c);
1411 c &= 0xbf;
1412 pci_write_config_byte(pdev, 0xf8, c);
1414 /* sysfs needs pdev->class to be set correctly */
1415 pdev->class &= 0x0000ff;
1416 pdev->class |= (PCI_CLASS_NETWORK_ETHERNET << 8);
1420 static inline u8 __iomem *get_hwbase(struct net_device *dev)
1422 return ((struct fe_priv *)get_nvpriv(dev))->base;
1425 static inline void pci_push(u8 __iomem *base)
1427 /* force out pending posted writes */
1428 readl(base);
1431 static inline u32 nv_descr_getlength(struct ring_desc *prd, u32 v)
1433 return le32_to_cpu(prd->FlagLen)
1434 & ((v == DESC_VER_1) ? LEN_MASK_V1 : LEN_MASK_V2);
1437 static inline u32 nv_descr_getlength_ex(struct ring_desc_ex *prd, u32 v)
1439 return le32_to_cpu(prd->FlagLen) & LEN_MASK_V2;
1442 static int reg_delay(struct net_device *dev, int offset, u32 mask, u32 target,
1443 int delay, int delaymax, const char *msg)
1445 u8 __iomem *base = get_hwbase(dev);
1447 pci_push(base);
1448 do {
1449 udelay(delay);
1450 delaymax -= delay;
1451 if (delaymax < 0) {
1452 if (msg)
1453 printk(msg);
1454 return 1;
1456 } while ((readl(base + offset) & mask) != target);
1457 return 0;
1460 #define NV_SETUP_RX_RING 0x01
1461 #define NV_SETUP_TX_RING 0x02
1463 static void setup_hw_rings(struct net_device *dev, int rxtx_flags)
1465 struct fe_priv *np = get_nvpriv(dev);
1466 u8 __iomem *base = get_hwbase(dev);
1468 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
1469 if (rxtx_flags & NV_SETUP_RX_RING) {
1470 writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
1472 if (rxtx_flags & NV_SETUP_TX_RING) {
1473 writel((u32) cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc)), base + NvRegTxRingPhysAddr);
1475 } else {
1476 if (rxtx_flags & NV_SETUP_RX_RING) {
1477 writel((u32) cpu_to_le64(np->ring_addr), base + NvRegRxRingPhysAddr);
1478 writel((u32) (cpu_to_le64(np->ring_addr) >> 32), base + NvRegRxRingPhysAddrHigh);
1480 if (rxtx_flags & NV_SETUP_TX_RING) {
1481 writel((u32) cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)), base + NvRegTxRingPhysAddr);
1482 writel((u32) (cpu_to_le64(np->ring_addr + np->rx_ring_size*sizeof(struct ring_desc_ex)) >> 32), base + NvRegTxRingPhysAddrHigh);
1487 static void free_rings(struct net_device *dev)
1489 struct fe_priv *np = get_nvpriv(dev);
1491 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
1492 if(np->rx_ring.orig)
1493 pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size),
1494 np->rx_ring.orig, np->ring_addr);
1495 } else {
1496 if (np->rx_ring.ex)
1497 pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size),
1498 np->rx_ring.ex, np->ring_addr);
1500 if (np->rx_skb)
1501 kfree(np->rx_skb);
1502 if (np->tx_skb)
1503 kfree(np->tx_skb);
1506 static int using_multi_irqs(struct net_device *dev)
1508 struct fe_priv *np = get_nvpriv(dev);
1510 if (!(np->msi_flags & NV_MSI_X_ENABLED) ||
1511 ((np->msi_flags & NV_MSI_X_ENABLED) &&
1512 ((np->msi_flags & NV_MSI_X_VECTORS_MASK) == 0x1)))
1513 return 0;
1514 else
1515 return 1;
1518 static void nv_enable_irq(struct net_device *dev)
1520 struct fe_priv *np = get_nvpriv(dev);
1522 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
1523 /* modify network device class id */
1524 if (!using_multi_irqs(dev)) {
1525 if (np->msi_flags & NV_MSI_X_ENABLED)
1526 enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
1527 else
1528 enable_irq(np->pci_dev->irq);
1529 } else {
1530 enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
1531 enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
1532 enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
1536 static void nv_disable_irq(struct net_device *dev)
1538 struct fe_priv *np = get_nvpriv(dev);
1540 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
1541 if (!using_multi_irqs(dev)) {
1542 if (np->msi_flags & NV_MSI_X_ENABLED)
1543 disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
1544 else
1545 disable_irq(np->pci_dev->irq);
1546 } else {
1547 disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
1548 disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
1549 disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
1553 /* In MSIX mode, a write to irqmask behaves as XOR */
1554 static void nv_enable_hw_interrupts(struct net_device *dev, u32 mask)
1556 u8 __iomem *base = get_hwbase(dev);
1557 struct fe_priv *np = get_nvpriv(dev);
1559 writel(mask, base + NvRegIrqMask);
1560 if (np->msi_flags & NV_MSI_ENABLED)
1561 writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask);
1564 static void nv_disable_hw_interrupts(struct net_device *dev, u32 mask)
1566 struct fe_priv *np = get_nvpriv(dev);
1567 u8 __iomem *base = get_hwbase(dev);
1569 if (np->msi_flags & NV_MSI_X_ENABLED) {
1570 writel(mask, base + NvRegIrqMask);
1571 } else {
1572 if (np->msi_flags & NV_MSI_ENABLED)
1573 writel(0, base + NvRegMSIIrqMask);
1574 writel(0, base + NvRegIrqMask);
1578 #define MII_READ (-1)
1579 /* mii_rw: read/write a register on the PHY.
1581 * Caller must guarantee serialization
1582 */
1583 static int mii_rw(struct net_device *dev, int addr, int miireg, int value)
1585 u8 __iomem *base = get_hwbase(dev);
1586 u32 reg;
1587 int retval;
1589 writel(NVREG_MIISTAT_MASK_RW, base + NvRegMIIStatus);
1591 reg = readl(base + NvRegMIIControl);
1592 if (reg & NVREG_MIICTL_INUSE) {
1593 writel(NVREG_MIICTL_INUSE, base + NvRegMIIControl);
1594 udelay(NV_MIIBUSY_DELAY);
1597 reg = (addr << NVREG_MIICTL_ADDRSHIFT) | miireg;
1598 if (value != MII_READ) {
1599 writel(value, base + NvRegMIIData);
1600 reg |= NVREG_MIICTL_WRITE;
1602 writel(reg, base + NvRegMIIControl);
1604 if (reg_delay(dev, NvRegMIIControl, NVREG_MIICTL_INUSE, 0,
1605 NV_MIIPHY_DELAY, NV_MIIPHY_DELAYMAX, NULL)) {
1606 dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d timed out.\n",
1607 dev->name, miireg, addr);
1608 retval = -1;
1609 } else if (value != MII_READ) {
1610 /* it was a write operation - fewer failures are detectable */
1611 dprintk(KERN_DEBUG "%s: mii_rw wrote 0x%x to reg %d at PHY %d\n",
1612 dev->name, value, miireg, addr);
1613 retval = 0;
1614 } else if (readl(base + NvRegMIIStatus) & NVREG_MIISTAT_ERROR) {
1615 dprintk(KERN_DEBUG "%s: mii_rw of reg %d at PHY %d failed.\n",
1616 dev->name, miireg, addr);
1617 retval = -1;
1618 } else {
1619 retval = readl(base + NvRegMIIData);
1620 dprintk(KERN_DEBUG "%s: mii_rw read from reg %d at PHY %d: 0x%x.\n",
1621 dev->name, miireg, addr, retval);
1624 return retval;
1627 static void nv_save_LED_stats(struct net_device *dev)
1629 struct fe_priv *np = get_nvpriv(dev);
1630 u32 reg=0;
1631 u32 value=0;
1632 int i=0;
1634 reg = Mv_Page_Address;
1635 value = 3;
1636 mii_rw(dev,np->phyaddr,reg,value);
1637 udelay(5);
1639 reg = Mv_LED_Control;
1640 for(i=0;i<3;i++){
1641 np->led_stats[i]=mii_rw(dev,np->phyaddr,reg+i,MII_READ);
1642 dprintk(KERN_DEBUG "%s: save LED reg%d: value=0x%x\n",dev->name,reg+i,np->led_stats[i]);
1647 static void nv_restore_LED_stats(struct net_device *dev)
1650 struct fe_priv *np = get_nvpriv(dev);
1651 u32 reg=0;
1652 u32 value=0;
1653 int i=0;
1655 reg = Mv_Page_Address;
1656 value = 3;
1657 mii_rw(dev,np->phyaddr,reg,value);
1658 udelay(5);
1660 reg = Mv_LED_Control;
1661 for(i=0;i<3;i++){
1662 mii_rw(dev,np->phyaddr,reg+i,np->led_stats[i]);
1663 udelay(1);
1664 dprintk(KERN_DEBUG "%s: restore LED reg%d: value=0x%x\n",dev->name,reg+i,np->led_stats[i]);
1669 static void nv_LED_on(struct net_device *dev)
1671 struct fe_priv *np = get_nvpriv(dev);
1672 u32 reg=0;
1673 u32 value=0;
1675 reg = Mv_Page_Address;
1676 value = 3;
1677 mii_rw(dev,np->phyaddr,reg,value);
1678 udelay(5);
1680 reg = Mv_LED_Control;
1681 mii_rw(dev,np->phyaddr,reg,Mv_LED_DUAL_MODE3);
1685 static void nv_LED_off(struct net_device *dev)
1687 struct fe_priv *np = get_nvpriv(dev);
1688 u32 reg=0;
1689 u32 value=0;
1691 reg = Mv_Page_Address;
1692 value = 3;
1693 mii_rw(dev,np->phyaddr,reg,value);
1694 udelay(5);
1696 reg = Mv_LED_Control;
1697 mii_rw(dev,np->phyaddr,reg,Mv_LED_FORCE_OFF);
1698 udelay(1);
1702 static int phy_reset(struct net_device *dev, u32 bmcr_setup)
1704 struct fe_priv *np = get_nvpriv(dev);
1705 u32 miicontrol;
1706 unsigned int tries = 0;
1708 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
1709 if (np->phy_oui== PHY_OUI_MARVELL && np->phy_model == PHY_MODEL_MARVELL_E1011) {
1710 nv_save_LED_stats(dev);
1712 miicontrol = BMCR_RESET | bmcr_setup;
1713 if (mii_rw(dev, np->phyaddr, MII_BMCR, miicontrol)) {
1714 return -1;
1717 /* wait for 500ms */
1718 nv_msleep(500);
1720 /* must wait till reset is deasserted */
1721 while (miicontrol & BMCR_RESET) {
1722 nv_msleep(10);
1723 miicontrol = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
1724 /* FIXME: 100 tries seem excessive */
1725 if (tries++ > 100)
1726 return -1;
1728 if (np->phy_oui== PHY_OUI_MARVELL && np->phy_model == PHY_MODEL_MARVELL_E1011) {
1729 nv_restore_LED_stats(dev);
1732 return 0;
1735 static int phy_init(struct net_device *dev)
1737 struct fe_priv *np = get_nvpriv(dev);
1738 u8 __iomem *base = get_hwbase(dev);
1739 u32 phyinterface, phy_reserved, mii_status, mii_control, mii_control_1000,reg;
1741 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
1742 /* phy errata for E3016 phy */
1743 if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
1744 reg = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ);
1745 reg &= ~PHY_MARVELL_E3016_INITMASK;
1746 if (mii_rw(dev, np->phyaddr, MII_NCONFIG, reg)) {
1747 printk(KERN_INFO "%s: phy write to errata reg failed.\n", pci_name(np->pci_dev));
1748 return PHY_ERROR;
1752 if (np->phy_oui == PHY_OUI_REALTEK) {
1753 if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) {
1754 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1755 return PHY_ERROR;
1757 if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG2, PHY_REALTEK_INIT2)) {
1758 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1759 return PHY_ERROR;
1761 if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT3)) {
1762 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1763 return PHY_ERROR;
1765 if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG3, PHY_REALTEK_INIT4)) {
1766 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1767 return PHY_ERROR;
1769 if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) {
1770 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1771 return PHY_ERROR;
1775 /* set advertise register */
1776 reg = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
1777 reg &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
1778 if (np->speed_duplex == NV_SPEED_DUPLEX_AUTO)
1779 reg |= (ADVERTISE_10HALF|ADVERTISE_10FULL|ADVERTISE_100HALF|ADVERTISE_100FULL);
1780 if (np->speed_duplex == NV_SPEED_DUPLEX_10_HALF_DUPLEX)
1781 reg |= ADVERTISE_10HALF;
1782 if (np->speed_duplex == NV_SPEED_DUPLEX_10_FULL_DUPLEX)
1783 reg |= ADVERTISE_10FULL;
1784 if (np->speed_duplex == NV_SPEED_DUPLEX_100_HALF_DUPLEX)
1785 reg |= ADVERTISE_100HALF;
1786 if (np->speed_duplex == NV_SPEED_DUPLEX_100_FULL_DUPLEX)
1787 reg |= ADVERTISE_100FULL;
1788 if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisements but disable tx pause */
1789 reg |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
1790 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
1791 reg |= ADVERTISE_PAUSE_ASYM;
1792 np->fixed_mode = reg;
1794 if (mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg)) {
1795 printk(KERN_INFO "%s: phy write to advertise failed.\n", pci_name(np->pci_dev));
1796 return PHY_ERROR;
1799 /* get phy interface type */
1800 phyinterface = readl(base + NvRegPhyInterface);
1802 /* see if gigabit phy */
1803 mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
1804 if (mii_status & PHY_GIGABIT) {
1805 np->gigabit = PHY_GIGABIT;
1806 mii_control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
1807 mii_control_1000 &= ~ADVERTISE_1000HALF;
1808 if (phyinterface & PHY_RGMII &&
1809 (np->speed_duplex == NV_SPEED_DUPLEX_AUTO ||
1810 (np->speed_duplex == NV_SPEED_DUPLEX_1000_FULL_DUPLEX && np->autoneg == AUTONEG_ENABLE)))
1811 mii_control_1000 |= ADVERTISE_1000FULL;
1812 else {
1813 if (np->speed_duplex == NV_SPEED_DUPLEX_1000_FULL_DUPLEX && np->autoneg == AUTONEG_DISABLE)
1814 printk(KERN_INFO "%s: 1000mbps full only allowed with autoneg\n", pci_name(np->pci_dev));
1815 mii_control_1000 &= ~ADVERTISE_1000FULL;
1817 if (mii_rw(dev, np->phyaddr, MII_CTRL1000, mii_control_1000)) {
1818 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1819 return PHY_ERROR;
1822 else
1823 np->gigabit = 0;
1825 mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
1826 if (np->autoneg == AUTONEG_DISABLE) {
1827 np->pause_flags &= ~(NV_PAUSEFRAME_RX_ENABLE | NV_PAUSEFRAME_TX_ENABLE);
1828 if (np->pause_flags & NV_PAUSEFRAME_RX_REQ)
1829 np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
1830 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
1831 np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
1832 mii_control &= ~(BMCR_ANENABLE|BMCR_SPEED100|BMCR_SPEED1000|BMCR_FULLDPLX);
1833 if (reg & (ADVERTISE_10FULL|ADVERTISE_100FULL))
1834 mii_control |= BMCR_FULLDPLX;
1835 if (reg & (ADVERTISE_100HALF|ADVERTISE_100FULL))
1836 mii_control |= BMCR_SPEED100;
1837 } else {
1838 mii_control |= BMCR_ANENABLE;
1841 /* reset the phy and setup BMCR
1842 * (certain phys need reset at same time new values are set) */
1843 if (phy_reset(dev, mii_control)) {
1844 printk(KERN_INFO "%s: phy reset failed\n", pci_name(np->pci_dev));
1845 return PHY_ERROR;
1848 /* phy vendor specific configuration */
1849 if ((np->phy_oui == PHY_OUI_CICADA) && (phyinterface & PHY_RGMII) ) {
1850 phy_reserved = mii_rw(dev, np->phyaddr, MII_RESV1, MII_READ);
1851 phy_reserved &= ~(PHY_CICADA_INIT1 | PHY_CICADA_INIT2);
1852 phy_reserved |= (PHY_CICADA_INIT3 | PHY_CICADA_INIT4);
1853 if (mii_rw(dev, np->phyaddr, MII_RESV1, phy_reserved)) {
1854 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1855 return PHY_ERROR;
1857 phy_reserved = mii_rw(dev, np->phyaddr, MII_NCONFIG, MII_READ);
1858 phy_reserved |= PHY_CICADA_INIT5;
1859 if (mii_rw(dev, np->phyaddr, MII_NCONFIG, phy_reserved)) {
1860 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1861 return PHY_ERROR;
1864 if (np->phy_oui == PHY_OUI_CICADA) {
1865 phy_reserved = mii_rw(dev, np->phyaddr, MII_SREVISION, MII_READ);
1866 phy_reserved |= PHY_CICADA_INIT6;
1867 if (mii_rw(dev, np->phyaddr, MII_SREVISION, phy_reserved)) {
1868 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1869 return PHY_ERROR;
1872 if (np->phy_oui == PHY_OUI_VITESSE) {
1873 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG1, PHY_VITESSE_INIT1)) {
1874 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1875 return PHY_ERROR;
1877 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT2)) {
1878 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1879 return PHY_ERROR;
1881 phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, MII_READ);
1882 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, phy_reserved)) {
1883 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1884 return PHY_ERROR;
1886 phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, MII_READ);
1887 phy_reserved &= ~PHY_VITESSE_INIT_MSK1;
1888 phy_reserved |= PHY_VITESSE_INIT3;
1889 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, phy_reserved)) {
1890 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1891 return PHY_ERROR;
1893 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT4)) {
1894 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1895 return PHY_ERROR;
1897 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT5)) {
1898 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1899 return PHY_ERROR;
1901 phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, MII_READ);
1902 phy_reserved &= ~PHY_VITESSE_INIT_MSK1;
1903 phy_reserved |= PHY_VITESSE_INIT3;
1904 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, phy_reserved)) {
1905 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1906 return PHY_ERROR;
1908 phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, MII_READ);
1909 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, phy_reserved)) {
1910 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1911 return PHY_ERROR;
1913 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT6)) {
1914 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1915 return PHY_ERROR;
1917 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT7)) {
1918 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1919 return PHY_ERROR;
1921 phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, MII_READ);
1922 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG4, phy_reserved)) {
1923 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1924 return PHY_ERROR;
1926 phy_reserved = mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, MII_READ);
1927 phy_reserved &= ~PHY_VITESSE_INIT_MSK2;
1928 phy_reserved |= PHY_VITESSE_INIT8;
1929 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG3, phy_reserved)) {
1930 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1931 return PHY_ERROR;
1933 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG2, PHY_VITESSE_INIT9)) {
1934 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1935 return PHY_ERROR;
1937 if (mii_rw(dev, np->phyaddr, PHY_VITESSE_INIT_REG1, PHY_VITESSE_INIT10)) {
1938 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1939 return PHY_ERROR;
1942 if (np->phy_oui == PHY_OUI_REALTEK) {
1943 if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) {
1944 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1945 return PHY_ERROR;
1947 if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG2, PHY_REALTEK_INIT2)) {
1948 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1949 return PHY_ERROR;
1951 if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT3)) {
1952 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1953 return PHY_ERROR;
1955 if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG3, PHY_REALTEK_INIT4)) {
1956 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1957 return PHY_ERROR;
1959 if (mii_rw(dev, np->phyaddr, PHY_REALTEK_INIT_REG1, PHY_REALTEK_INIT1)) {
1960 printk(KERN_INFO "%s: phy init failed.\n", pci_name(np->pci_dev));
1961 return PHY_ERROR;
1964 /* some phys clear out pause advertisement on reset, set it back */
1965 mii_rw(dev, np->phyaddr, MII_ADVERTISE, reg);
1967 /* restart auto negotiation */
1968 if (np->autoneg == AUTONEG_ENABLE) {
1969 mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
1970 mii_control |= (BMCR_ANRESTART | BMCR_ANENABLE);
1971 if (mii_rw(dev, np->phyaddr, MII_BMCR, mii_control)) {
1972 return PHY_ERROR;
1976 return 0;
1979 static void nv_start_rx(struct net_device *dev)
1981 struct fe_priv *np = get_nvpriv(dev);
1982 u8 __iomem *base = get_hwbase(dev);
1983 u32 rx_ctrl = readl(base + NvRegReceiverControl);
1985 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
1987 /* Already running? Stop it. */
1988 if ((readl(base + NvRegReceiverControl) & NVREG_RCVCTL_START) && !np->mac_in_use) {
1989 rx_ctrl &= ~NVREG_RCVCTL_START;
1990 writel(rx_ctrl, base + NvRegReceiverControl);
1991 pci_push(base);
1993 writel(np->linkspeed, base + NvRegLinkSpeed);
1994 pci_push(base);
1995 rx_ctrl |= NVREG_RCVCTL_START;
1996 if (np->mac_in_use)
1997 rx_ctrl &= ~NVREG_RCVCTL_RX_PATH_EN;
1998 writel(rx_ctrl, base + NvRegReceiverControl);
1999 dprintk(KERN_DEBUG "%s: nv_start_rx to duplex %d, speed 0x%08x.\n",
2000 dev->name, np->duplex, np->linkspeed);
2001 pci_push(base);
2004 static void nv_stop_rx(struct net_device *dev)
2006 struct fe_priv *np = get_nvpriv(dev);
2007 u8 __iomem *base = get_hwbase(dev);
2008 u32 rx_ctrl = readl(base + NvRegReceiverControl);
2010 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
2011 if (!np->mac_in_use)
2012 rx_ctrl &= ~NVREG_RCVCTL_START;
2013 else
2014 rx_ctrl |= NVREG_RCVCTL_RX_PATH_EN;
2015 writel(rx_ctrl, base + NvRegReceiverControl);
2016 reg_delay(dev, NvRegReceiverStatus, NVREG_RCVSTAT_BUSY, 0,
2017 NV_RXSTOP_DELAY1, NV_RXSTOP_DELAY1MAX,
2018 KERN_INFO "nv_stop_rx: ReceiverStatus remained busy");
2020 udelay(NV_RXSTOP_DELAY2);
2021 if (!np->mac_in_use)
2022 writel(0, base + NvRegLinkSpeed);
2025 static void nv_start_tx(struct net_device *dev)
2027 struct fe_priv *np = get_nvpriv(dev);
2028 u8 __iomem *base = get_hwbase(dev);
2029 u32 tx_ctrl = readl(base + NvRegTransmitterControl);
2031 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
2032 tx_ctrl |= NVREG_XMITCTL_START;
2033 if (np->mac_in_use)
2034 tx_ctrl &= ~NVREG_XMITCTL_TX_PATH_EN;
2035 writel(tx_ctrl, base + NvRegTransmitterControl);
2036 pci_push(base);
2039 static void nv_stop_tx(struct net_device *dev)
2041 struct fe_priv *np = get_nvpriv(dev);
2042 u8 __iomem *base = get_hwbase(dev);
2043 u32 tx_ctrl = readl(base + NvRegTransmitterControl);
2045 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
2046 if (!np->mac_in_use)
2047 tx_ctrl &= ~NVREG_XMITCTL_START;
2048 else
2049 tx_ctrl |= NVREG_XMITCTL_TX_PATH_EN;
2050 writel(tx_ctrl, base + NvRegTransmitterControl);
2051 reg_delay(dev, NvRegTransmitterStatus, NVREG_XMITSTAT_BUSY, 0,
2052 NV_TXSTOP_DELAY1, NV_TXSTOP_DELAY1MAX,
2053 KERN_INFO "nv_stop_tx: TransmitterStatus remained busy");
2055 udelay(NV_TXSTOP_DELAY2);
2056 if (!np->mac_in_use)
2057 writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll);
2060 static void nv_txrx_reset(struct net_device *dev)
2062 struct fe_priv *np = get_nvpriv(dev);
2063 u8 __iomem *base = get_hwbase(dev);
2064 unsigned int i;
2066 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
2067 writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
2068 for (i = 0; i < 10000; i++) {
2069 udelay(1);
2070 if (readl(base + NvRegTxRxControl) & NVREG_TXRXCTL_IDLE)
2071 break;
2073 writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
2074 pci_push(base);
2075 udelay(NV_TXRX_RESET_DELAY);
2076 pci_push(base);
2079 static void nv_mac_reset(struct net_device *dev)
2081 struct fe_priv *np = get_nvpriv(dev);
2082 u8 __iomem *base = get_hwbase(dev);
2083 u32 temp1, temp2, temp3;
2085 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
2086 writel(NVREG_TXRXCTL_BIT2 | NVREG_TXRXCTL_RESET | np->txrxctl_bits, base + NvRegTxRxControl);
2088 /* save registers since they will be cleared on reset */
2089 temp1 = readl(base + NvRegMacAddrA);
2090 temp2 = readl(base + NvRegMacAddrB);
2091 temp3 = readl(base + NvRegTransmitPoll);
2093 pci_push(base);
2094 writel(NVREG_MAC_RESET_ASSERT, base + NvRegMacReset);
2095 pci_push(base);
2096 udelay(NV_MAC_RESET_DELAY);
2097 writel(0, base + NvRegMacReset);
2098 pci_push(base);
2099 udelay(NV_MAC_RESET_DELAY);
2101 /* restore saved registers */
2102 writel(temp1, base + NvRegMacAddrA);
2103 writel(temp2, base + NvRegMacAddrB);
2104 writel(temp3, base + NvRegTransmitPoll);
2106 writel(NVREG_TXRXCTL_BIT2 | np->txrxctl_bits, base + NvRegTxRxControl);
2107 pci_push(base);
2110 #if NVVER < SLES9
2111 static int nv_ethtool_ioctl(struct net_device *dev, void *useraddr)
2113 struct fe_priv *np = get_nvpriv(dev);
2114 u8 *base = get_hwbase(dev);
2115 u32 ethcmd;
2117 if (copy_from_user(&ethcmd, useraddr, sizeof (ethcmd)))
2118 return -EFAULT;
2120 switch (ethcmd) {
2121 case ETHTOOL_GDRVINFO:
2123 struct ethtool_drvinfo info = { ETHTOOL_GDRVINFO };
2124 strcpy(info.driver, "forcedeth");
2125 strcpy(info.version, FORCEDETH_VERSION);
2126 strcpy(info.bus_info, pci_name(np->pci_dev));
2127 if (copy_to_user(useraddr, &info, sizeof (info)))
2128 return -EFAULT;
2129 return 0;
2131 case ETHTOOL_GLINK:
2133 struct ethtool_value edata = { ETHTOOL_GLINK };
2135 edata.data = !!netif_carrier_ok(dev);
2137 if (copy_to_user(useraddr, &edata, sizeof(edata)))
2138 return -EFAULT;
2139 return 0;
2141 case ETHTOOL_GWOL:
2143 struct ethtool_wolinfo wolinfo;
2144 memset(&wolinfo, 0, sizeof(wolinfo));
2145 wolinfo.supported = WAKE_MAGIC;
2147 spin_lock_irq(&np->lock);
2148 if (np->wolenabled)
2149 wolinfo.wolopts = WAKE_MAGIC;
2150 spin_unlock_irq(&np->lock);
2152 if (copy_to_user(useraddr, &wolinfo, sizeof(wolinfo)))
2153 return -EFAULT;
2154 return 0;
2156 case ETHTOOL_SWOL:
2158 struct ethtool_wolinfo wolinfo;
2159 if (copy_from_user(&wolinfo, useraddr, sizeof(wolinfo)))
2160 return -EFAULT;
2162 spin_lock_irq(&np->lock);
2163 if (wolinfo.wolopts == 0) {
2164 writel(0, base + NvRegWakeUpFlags);
2165 np->wolenabled = NV_WOL_DISABLED;
2167 if (wolinfo.wolopts & WAKE_MAGIC) {
2168 writel(NVREG_WAKEUPFLAGS_ENABLE, base + NvRegWakeUpFlags);
2169 np->wolenabled = NV_WOL_ENABLED;
2171 spin_unlock_irq(&np->lock);
2172 return 0;
2175 default:
2176 break;
2179 return -EOPNOTSUPP;
2182 /*
2183 * nv_ioctl: dev->do_ioctl function
2184 * Called with rtnl_lock held.
2185 */
2186 static int nv_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
2188 switch(cmd) {
2189 case SIOCETHTOOL:
2190 return nv_ethtool_ioctl(dev, rq->ifr_data);
2192 default:
2193 return -EOPNOTSUPP;
2196 #endif
2198 /*
2199 * nv_alloc_rx: fill rx ring entries.
2200 * Return 1 if the allocations for the skbs failed and the
2201 * rx engine is left without available descriptors.
2202 */
2203 static inline int nv_alloc_rx(struct net_device *dev)
2205 struct fe_priv *np = get_nvpriv(dev);
2206 struct ring_desc* less_rx;
2207 struct sk_buff *skb;
2209 less_rx = np->get_rx.orig;
2210 if (less_rx-- == np->first_rx.orig)
2211 less_rx = np->last_rx.orig;
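/* less_rx is left one slot behind get_rx: the fill loop below stops
 * there, so the ring never becomes completely full and a full ring
 * stays distinguishable from an empty one (put == get). */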
2213 while (np->put_rx.orig != less_rx) {
2214 skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD);
2215 if (skb) {
2216 skb->dev = dev;
2217 np->put_rx_ctx->skb = skb;
2218 #if NVVER > FEDORA7
2219 np->put_rx_ctx->dma = pci_map_single(np->pci_dev, skb->data,
2220 skb_tailroom(skb), PCI_DMA_FROMDEVICE);
2221 np->put_rx_ctx->dma_len = skb_tailroom(skb);
2222 #else
2223 np->put_rx_ctx->dma = pci_map_single(np->pci_dev, skb->data,
2224 skb->end-skb->data, PCI_DMA_FROMDEVICE);
2225 np->put_rx_ctx->dma_len = skb->end-skb->data;
2226 #endif
2227 np->put_rx.orig->PacketBuffer = cpu_to_le32(np->put_rx_ctx->dma);
2228 wmb();
2229 np->put_rx.orig->FlagLen = cpu_to_le32(np->rx_buf_sz | NV_RX_AVAIL);
2230 if (unlikely(np->put_rx.orig++ == np->last_rx.orig))
2231 np->put_rx.orig = np->first_rx.orig;
2232 if (unlikely(np->put_rx_ctx++ == np->last_rx_ctx))
2233 np->put_rx_ctx = np->first_rx_ctx;
2234 } else {
2235 return 1;
2238 return 0;
2241 static inline int nv_alloc_rx_optimized(struct net_device *dev)
2243 struct fe_priv *np = get_nvpriv(dev);
2244 struct ring_desc_ex* less_rx;
2245 struct sk_buff *skb;
2247 less_rx = np->get_rx.ex;
2248 if (less_rx-- == np->first_rx.ex)
2249 less_rx = np->last_rx.ex;
2251 while (np->put_rx.ex != less_rx) {
2252 skb = dev_alloc_skb(np->rx_buf_sz + NV_RX_ALLOC_PAD);
2253 if (skb) {
2254 skb->dev = dev;
2255 np->put_rx_ctx->skb = skb;
2256 #if NVVER > FEDORA7
2257 np->put_rx_ctx->dma = pci_map_single(np->pci_dev, skb->data,
2258 skb_tailroom(skb), PCI_DMA_FROMDEVICE);
2259 np->put_rx_ctx->dma_len = skb_tailroom(skb);
2260 #else
2261 np->put_rx_ctx->dma = pci_map_single(np->pci_dev, skb->data,
2262 skb->end-skb->data, PCI_DMA_FROMDEVICE);
2263 np->put_rx_ctx->dma_len = skb->end-skb->data;
2264 #endif
2265 np->put_rx.ex->PacketBufferHigh = cpu_to_le64(np->put_rx_ctx->dma) >> 32;
2266 np->put_rx.ex->PacketBufferLow = cpu_to_le64(np->put_rx_ctx->dma) & 0x0FFFFFFFF;
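/* The extended (ex) descriptors carry the 64-bit DMA address split into
 * two 32-bit words; both halves are written before the wmb() and the
 * FlagLen write below hand the descriptor to the hardware. */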
2267 wmb();
2268 np->put_rx.ex->FlagLen = cpu_to_le32(np->rx_buf_sz | NV_RX2_AVAIL);
2269 if (unlikely(np->put_rx.ex++ == np->last_rx.ex))
2270 np->put_rx.ex = np->first_rx.ex;
2271 if (unlikely(np->put_rx_ctx++ == np->last_rx_ctx))
2272 np->put_rx_ctx = np->first_rx_ctx;
2273 } else {
2274 return 1;
2277 return 0;
2281 static void nv_do_rx_refill(unsigned long data)
2283 struct net_device *dev = (struct net_device *) data;
2284 struct fe_priv *np = get_nvpriv(dev);
2285 int retcode;
2287 spin_lock_irq(&np->timer_lock);
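/* The refill runs from a timer, so the relevant rx interrupt source is
 * disabled below to keep this path from racing with the interrupt
 * handler over the same rx ring and skb context entries. */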
2288 if (!using_multi_irqs(dev)) {
2289 if (np->msi_flags & NV_MSI_X_ENABLED)
2290 disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
2291 else
2292 disable_irq(np->pci_dev->irq);
2293 } else {
2294 disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
2297 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
2298 retcode = nv_alloc_rx(dev);
2299 else
2300 retcode = nv_alloc_rx_optimized(dev);
2301 if (retcode) {
2302 spin_lock_irq(&np->lock);
2303 if (!np->in_shutdown)
2304 mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
2305 spin_unlock_irq(&np->lock);
2307 if (!using_multi_irqs(dev)) {
2308 if (np->msi_flags & NV_MSI_X_ENABLED)
2309 enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
2310 else
2311 enable_irq(np->pci_dev->irq);
2312 } else {
2313 enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
2315 spin_unlock_irq(&np->timer_lock);
2318 static void nv_init_rx(struct net_device *dev)
2320 struct fe_priv *np = get_nvpriv(dev);
2321 int i;
2323 np->get_rx = np->put_rx = np->first_rx = np->rx_ring;
2324 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
2325 np->last_rx.orig = &np->rx_ring.orig[np->rx_ring_size-1];
2326 else
2327 np->last_rx.ex = &np->rx_ring.ex[np->rx_ring_size-1];
2328 np->get_rx_ctx = np->put_rx_ctx = np->first_rx_ctx = np->rx_skb;
2329 np->last_rx_ctx = &np->rx_skb[np->rx_ring_size-1];
2331 for (i = 0; i < np->rx_ring_size; i++) {
2332 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
2333 np->rx_ring.orig[i].FlagLen = 0;
2334 np->rx_ring.orig[i].PacketBuffer = 0;
2335 } else {
2336 np->rx_ring.ex[i].FlagLen = 0;
2337 np->rx_ring.ex[i].TxVlan = 0;
2338 np->rx_ring.ex[i].PacketBufferHigh = 0;
2339 np->rx_ring.ex[i].PacketBufferLow = 0;
2341 np->rx_skb[i].skb = NULL;
2342 np->rx_skb[i].dma = 0;
2346 static void nv_init_tx(struct net_device *dev)
2348 struct fe_priv *np = get_nvpriv(dev);
2349 int i;
2351 np->get_tx = np->put_tx = np->first_tx = np->tx_ring;
2352 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
2353 np->last_tx.orig = &np->tx_ring.orig[np->tx_ring_size-1];
2354 else
2355 np->last_tx.ex = &np->tx_ring.ex[np->tx_ring_size-1];
2356 np->get_tx_ctx = np->put_tx_ctx = np->first_tx_ctx = np->tx_skb;
2357 np->last_tx_ctx = &np->tx_skb[np->tx_ring_size-1];
2359 for (i = 0; i < np->tx_ring_size; i++) {
2360 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
2361 np->tx_ring.orig[i].FlagLen = 0;
2362 np->tx_ring.orig[i].PacketBuffer = 0;
2363 } else {
2364 np->tx_ring.ex[i].FlagLen = 0;
2365 np->tx_ring.ex[i].TxVlan = 0;
2366 np->tx_ring.ex[i].PacketBufferHigh = 0;
2367 np->tx_ring.ex[i].PacketBufferLow = 0;
2369 np->tx_skb[i].skb = NULL;
2370 np->tx_skb[i].dma = 0;
2374 static int nv_init_ring(struct net_device *dev)
2376 struct fe_priv *np = get_nvpriv(dev);
2377 nv_init_tx(dev);
2378 nv_init_rx(dev);
2379 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
2380 return nv_alloc_rx(dev);
2381 else
2382 return nv_alloc_rx_optimized(dev);
2385 static int nv_release_txskb(struct net_device *dev, unsigned int skbnr)
2387 struct fe_priv *np = get_nvpriv(dev);
2389 dprintk(KERN_INFO "%s: nv_release_txskb for skbnr %d\n",
2390 dev->name, skbnr);
2392 if (np->tx_skb[skbnr].dma) {
2393 pci_unmap_page(np->pci_dev, np->tx_skb[skbnr].dma,
2394 np->tx_skb[skbnr].dma_len,
2395 PCI_DMA_TODEVICE);
2396 np->tx_skb[skbnr].dma = 0;
2398 if (np->tx_skb[skbnr].skb) {
2399 dev_kfree_skb_any(np->tx_skb[skbnr].skb);
2400 np->tx_skb[skbnr].skb = NULL;
2401 return 1;
2402 } else {
2403 return 0;
2407 static void nv_drain_tx(struct net_device *dev)
2409 struct fe_priv *np = get_nvpriv(dev);
2410 unsigned int i;
2412 for (i = 0; i < np->tx_ring_size; i++) {
2413 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
2414 np->tx_ring.orig[i].FlagLen = 0;
2415 np->tx_ring.orig[i].PacketBuffer = 0;
2416 } else {
2417 np->tx_ring.ex[i].FlagLen = 0;
2418 np->tx_ring.ex[i].TxVlan = 0;
2419 np->tx_ring.ex[i].PacketBufferHigh = 0;
2420 np->tx_ring.ex[i].PacketBufferLow = 0;
2422 if (nv_release_txskb(dev, i))
2423 np->stats.tx_dropped++;
2427 static void nv_drain_rx(struct net_device *dev)
2429 struct fe_priv *np = get_nvpriv(dev);
2430 int i;
2431 for (i = 0; i < np->rx_ring_size; i++) {
2432 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
2433 np->rx_ring.orig[i].FlagLen = 0;
2434 np->rx_ring.orig[i].PacketBuffer = 0;
2435 } else {
2436 np->rx_ring.ex[i].FlagLen = 0;
2437 np->rx_ring.ex[i].TxVlan = 0;
2438 np->rx_ring.ex[i].PacketBufferHigh = 0;
2439 np->rx_ring.ex[i].PacketBufferLow = 0;
2441 wmb();
2442 if (np->rx_skb[i].skb) {
2443 #if NVVER > FEDORA7
2444 pci_unmap_single(np->pci_dev, np->rx_skb[i].dma,
2445 (skb_end_pointer(np->rx_skb[i].skb) - np->rx_skb[i].skb->data),
2446 PCI_DMA_FROMDEVICE);
2447 #else
2448 pci_unmap_single(np->pci_dev, np->rx_skb[i].dma,
2449 np->rx_skb[i].skb->end-np->rx_skb[i].skb->data,
2450 PCI_DMA_FROMDEVICE);
2451 #endif
2452 dev_kfree_skb(np->rx_skb[i].skb);
2453 np->rx_skb[i].skb = NULL;
2458 static void drain_ring(struct net_device *dev)
2460 nv_drain_tx(dev);
2461 nv_drain_rx(dev);
2464 /*
2465 * nv_start_xmit: dev->hard_start_xmit function
2466 * Called with dev->xmit_lock held.
2467 */
2468 static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
2470 struct fe_priv *np = get_nvpriv(dev);
2471 u32 tx_flags = 0;
2472 u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET);
2473 unsigned int fragments = skb_shinfo(skb)->nr_frags;
2474 unsigned int i;
2475 u32 offset = 0;
2476 u32 bcnt;
2477 u32 size = skb->len-skb->data_len;
2478 u32 entries = (size >> NV_TX2_TSO_MAX_SHIFT) + ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
2479 u32 empty_slots;
2480 struct ring_desc* put_tx;
2481 struct ring_desc* start_tx;
2482 struct ring_desc* prev_tx;
2483 struct nv_skb_map* prev_tx_ctx;
2485 dprintk("%s:%s\n",dev->name,__FUNCTION__);
2486 /* add fragments to entries count */
2487 for (i = 0; i < fragments; i++) {
2488 entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) +
2489 ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
2492 empty_slots = (u32)(np->tx_ring_size - ((np->tx_ring_size + (np->put_tx_ctx - np->get_tx_ctx)) % np->tx_ring_size));
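/* entries is the descriptor count needed for this skb: every buffer is
 * split into chunks of at most NV_TX2_TSO_MAX_SIZE bytes, i.e. a
 * ceiling division per buffer. empty_slots is the ring size minus the
 * slots in flight ((put_tx_ctx - get_tx_ctx) modulo ring size); the
 * check below requires strictly more free slots than entries so at
 * least one descriptor always stays unused. */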
2493 if (likely(empty_slots > entries)) {
2495 start_tx = put_tx = np->put_tx.orig;
2497 /* setup the header buffer */
2498 do {
2499 prev_tx = put_tx;
2500 prev_tx_ctx = np->put_tx_ctx;
2501 bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
2502 np->put_tx_ctx->dma = pci_map_single(np->pci_dev, skb->data + offset, bcnt,
2503 PCI_DMA_TODEVICE);
2504 np->put_tx_ctx->dma_len = bcnt;
2505 put_tx->PacketBuffer = cpu_to_le32(np->put_tx_ctx->dma);
2506 put_tx->FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
2508 tx_flags = np->tx_flags;
2509 offset += bcnt;
2510 size -= bcnt;
2511 if (unlikely(put_tx++ == np->last_tx.orig))
2512 put_tx = np->first_tx.orig;
2513 if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
2514 np->put_tx_ctx = np->first_tx_ctx;
2515 } while(size);
2517 /* setup the fragments */
2518 for (i = 0; i < fragments; i++) {
2519 skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
2520 u32 size = frag->size;
2521 offset = 0;
2523 do {
2524 prev_tx = put_tx;
2525 prev_tx_ctx = np->put_tx_ctx;
2526 bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
2528 np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt,
2529 PCI_DMA_TODEVICE);
2530 np->put_tx_ctx->dma_len = bcnt;
2532 put_tx->PacketBuffer = cpu_to_le32(np->put_tx_ctx->dma);
2533 put_tx->FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
2534 offset += bcnt;
2535 size -= bcnt;
2536 if (unlikely(put_tx++ == np->last_tx.orig))
2537 put_tx = np->first_tx.orig;
2538 if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
2539 np->put_tx_ctx = np->first_tx_ctx;
2540 } while (size);
2543 /* set last fragment flag */
2544 prev_tx->FlagLen |= cpu_to_le32(tx_flags_extra);
2546 /* save skb in this slot's context area */
2547 prev_tx_ctx->skb = skb;
2549 #ifdef NETIF_F_TSO
2550 #if NVVER > FEDORA5
2551 if (skb_shinfo(skb)->gso_size)
2552 tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->gso_size << NV_TX2_TSO_SHIFT);
2553 #else
2554 if (skb_shinfo(skb)->tso_size)
2555 tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->tso_size << NV_TX2_TSO_SHIFT);
2556 #endif
2557 else
2558 #endif
2559 tx_flags_extra = (skb->ip_summed == CHECKSUM_HW ? (NV_TX2_CHECKSUM_L3|NV_TX2_CHECKSUM_L4) : 0);
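/* The block above selects the per-packet offload: for TSO frames the
 * MSS (gso_size/tso_size) is encoded into the descriptor flags and the
 * nic segments the packet, otherwise CHECKSUM_HW skbs just request
 * hardware L3/L4 checksumming. */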
2561 spin_lock_irq(&np->lock);
2563 /* set tx flags */
2564 start_tx->FlagLen |= cpu_to_le32(tx_flags | tx_flags_extra);
2565 np->put_tx.orig = put_tx;
2567 spin_unlock_irq(&np->lock);
2569 dev->trans_start = jiffies;
2570 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
2571 return NETDEV_TX_OK;
2572 } else {
2573 spin_lock_irq(&np->lock);
2574 netif_stop_queue(dev);
2575 np->stop_tx = 1;
2576 spin_unlock_irq(&np->lock);
2577 return NETDEV_TX_BUSY;
2581 static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev)
2583 struct fe_priv *np = get_nvpriv(dev);
2584 u32 tx_flags = 0;
2585 u32 tx_flags_extra;
2586 unsigned int fragments = skb_shinfo(skb)->nr_frags;
2587 unsigned int i;
2588 u32 offset = 0;
2589 u32 bcnt;
2590 u32 size = skb->len-skb->data_len;
2591 u32 empty_slots;
2592 struct ring_desc_ex* put_tx;
2593 struct ring_desc_ex* start_tx;
2594 struct ring_desc_ex* prev_tx;
2595 struct nv_skb_map* prev_tx_ctx;
2597 u32 entries = (size >> NV_TX2_TSO_MAX_SHIFT) + ((size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
2599 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
2600 /* add fragments to entries count */
2601 for (i = 0; i < fragments; i++) {
2602 entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) +
2603 ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
2606 empty_slots = (u32)(np->tx_ring_size - ((np->tx_ring_size + (np->put_tx_ctx - np->get_tx_ctx)) % np->tx_ring_size));
2607 if (likely(empty_slots > entries)) {
2609 start_tx = put_tx = np->put_tx.ex;
2611 /* setup the header buffer */
2612 do {
2613 prev_tx = put_tx;
2614 prev_tx_ctx = np->put_tx_ctx;
2615 bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
2616 np->put_tx_ctx->dma = pci_map_single(np->pci_dev, skb->data + offset, bcnt,
2617 PCI_DMA_TODEVICE);
2618 np->put_tx_ctx->dma_len = bcnt;
2619 put_tx->PacketBufferHigh = cpu_to_le64(np->put_tx_ctx->dma) >> 32;
2620 put_tx->PacketBufferLow = cpu_to_le64(np->put_tx_ctx->dma) & 0x0FFFFFFFF;
2621 put_tx->FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
2623 tx_flags = NV_TX2_VALID;
2624 offset += bcnt;
2625 size -= bcnt;
2626 if (unlikely(put_tx++ == np->last_tx.ex))
2627 put_tx = np->first_tx.ex;
2628 if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
2629 np->put_tx_ctx = np->first_tx_ctx;
2630 } while(size);
2631 /* setup the fragments */
2632 for (i = 0; i < fragments; i++) {
2633 skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
2634 u32 size = frag->size;
2635 offset = 0;
2637 do {
2638 prev_tx = put_tx;
2639 prev_tx_ctx = np->put_tx_ctx;
2640 bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
2642 np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt,
2643 PCI_DMA_TODEVICE);
2644 np->put_tx_ctx->dma_len = bcnt;
2646 put_tx->PacketBufferHigh = cpu_to_le64(np->put_tx_ctx->dma) >> 32;
2647 put_tx->PacketBufferLow = cpu_to_le64(np->put_tx_ctx->dma) & 0x0FFFFFFFF;
2648 put_tx->FlagLen = cpu_to_le32((bcnt-1) | tx_flags);
2649 offset += bcnt;
2650 size -= bcnt;
2651 if (unlikely(put_tx++ == np->last_tx.ex))
2652 put_tx = np->first_tx.ex;
2653 if (unlikely(np->put_tx_ctx++ == np->last_tx_ctx))
2654 np->put_tx_ctx = np->first_tx_ctx;
2655 } while (size);
2658 /* set last fragment flag */
2659 prev_tx->FlagLen |= cpu_to_le32(NV_TX2_LASTPACKET);
2661 /* save skb in this slot's context area */
2662 prev_tx_ctx->skb = skb;
2664 #ifdef NETIF_F_TSO
2665 #if NVVER > FEDORA5
2666 if (skb_shinfo(skb)->gso_size)
2667 tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->gso_size << NV_TX2_TSO_SHIFT);
2668 #else
2669 if (skb_shinfo(skb)->tso_size)
2670 tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->tso_size << NV_TX2_TSO_SHIFT);
2671 #endif
2672 else
2673 #endif
2674 tx_flags_extra = (skb->ip_summed == CHECKSUM_HW ? (NV_TX2_CHECKSUM_L3|NV_TX2_CHECKSUM_L4) : 0);
2676 /* vlan tag */
2677 if (likely(!np->vlangrp)) {
2678 start_tx->TxVlan = 0;
2679 } else {
2680 if (vlan_tx_tag_present(skb))
2681 start_tx->TxVlan = cpu_to_le32(NV_TX3_VLAN_TAG_PRESENT | vlan_tx_tag_get(skb));
2682 else
2683 start_tx->TxVlan = 0;
2686 spin_lock_irq(&np->lock);
2688 /* set tx flags */
2689 start_tx->FlagLen |= cpu_to_le32(tx_flags | tx_flags_extra);
2690 np->put_tx.ex = put_tx;
2692 spin_unlock_irq(&np->lock);
2694 dev->trans_start = jiffies;
2695 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
2696 return NETDEV_TX_OK;
2698 } else {
2699 spin_lock_irq(&np->lock);
2700 netif_stop_queue(dev);
2701 np->stop_tx = 1;
2702 spin_unlock_irq(&np->lock);
2703 return NETDEV_TX_BUSY;
2707 /*
2708 * nv_tx_done: check for completed packets, release the skbs.
2710 * Caller must own np->lock.
2711 */
2712 static inline void nv_tx_done(struct net_device *dev)
2714 struct fe_priv *np = get_nvpriv(dev);
2715 u32 Flags;
2716 struct ring_desc* orig_get_tx = np->get_tx.orig;
2717 struct ring_desc* put_tx = np->put_tx.orig;
2719 dprintk("%s:%s\n",dev->name,__FUNCTION__);
2720 while ((np->get_tx.orig != put_tx) &&
2721 !((Flags = le32_to_cpu(np->get_tx.orig->FlagLen)) & NV_TX_VALID)) {
2722 dprintk(KERN_DEBUG "%s: nv_tx_done:NVLAN tx done\n", dev->name);
2724 pci_unmap_page(np->pci_dev, np->get_tx_ctx->dma,
2725 np->get_tx_ctx->dma_len,
2726 PCI_DMA_TODEVICE);
2727 np->get_tx_ctx->dma = 0;
2729 if (np->desc_ver == DESC_VER_1) {
2730 if (Flags & NV_TX_LASTPACKET) {
2731 if (Flags & NV_TX_ERROR) {
2732 if (Flags & NV_TX_UNDERFLOW)
2733 np->stats.tx_fifo_errors++;
2734 if (Flags & NV_TX_CARRIERLOST)
2735 np->stats.tx_carrier_errors++;
2736 np->stats.tx_errors++;
2737 } else {
2738 np->stats.tx_packets++;
2739 np->stats.tx_bytes += np->get_tx_ctx->skb->len;
2741 dev_kfree_skb_any(np->get_tx_ctx->skb);
2742 np->get_tx_ctx->skb = NULL;
2745 } else {
2746 if (Flags & NV_TX2_LASTPACKET) {
2747 if (Flags & NV_TX2_ERROR) {
2748 if (Flags & NV_TX2_UNDERFLOW)
2749 np->stats.tx_fifo_errors++;
2750 if (Flags & NV_TX2_CARRIERLOST)
2751 np->stats.tx_carrier_errors++;
2752 np->stats.tx_errors++;
2753 } else {
2754 np->stats.tx_packets++;
2755 np->stats.tx_bytes += np->get_tx_ctx->skb->len;
2757 dev_kfree_skb_any(np->get_tx_ctx->skb);
2758 np->get_tx_ctx->skb = NULL;
2762 if (unlikely(np->get_tx.orig++ == np->last_tx.orig))
2763 np->get_tx.orig = np->first_tx.orig;
2764 if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx))
2765 np->get_tx_ctx = np->first_tx_ctx;
2767 if (unlikely((np->stop_tx == 1) && (np->get_tx.orig != orig_get_tx))) {
2768 np->stop_tx = 0;
2769 netif_wake_queue(dev);
2773 static inline void nv_tx_done_optimized(struct net_device *dev, int max_work)
2775 struct fe_priv *np = get_nvpriv(dev);
2776 u32 Flags;
2777 struct ring_desc_ex* orig_get_tx = np->get_tx.ex;
2778 struct ring_desc_ex* put_tx = np->put_tx.ex;
2780 while ((np->get_tx.ex != put_tx) &&
2781 !((Flags = le32_to_cpu(np->get_tx.ex->FlagLen)) & NV_TX_VALID) &&
2782 (max_work-- > 0)) {
2783 dprintk(KERN_DEBUG "%s: nv_tx_done_optimized:NVLAN tx done\n", dev->name);
2785 pci_unmap_page(np->pci_dev, np->get_tx_ctx->dma,
2786 np->get_tx_ctx->dma_len,
2787 PCI_DMA_TODEVICE);
2788 np->get_tx_ctx->dma = 0;
2790 if (Flags & NV_TX2_LASTPACKET) {
2791 if (!(Flags & NV_TX2_ERROR)) {
2792 np->stats.tx_packets++;
2794 dev_kfree_skb_any(np->get_tx_ctx->skb);
2795 np->get_tx_ctx->skb = NULL;
2798 if (unlikely(np->get_tx.ex++ == np->last_tx.ex))
2799 np->get_tx.ex = np->first_tx.ex;
2800 if (unlikely(np->get_tx_ctx++ == np->last_tx_ctx))
2801 np->get_tx_ctx = np->first_tx_ctx;
2803 if (unlikely((np->stop_tx == 1) && (np->get_tx.ex != orig_get_tx))) {
2804 np->stop_tx = 0;
2805 netif_wake_queue(dev);
2809 /*
2810 * nv_tx_timeout: dev->tx_timeout function
2811 * Called with dev->xmit_lock held.
2813 */
2814 static void nv_tx_timeout(struct net_device *dev)
2816 struct fe_priv *np = get_nvpriv(dev);
2817 u8 __iomem *base = get_hwbase(dev);
2818 u32 status;
2820 if (!netif_running(dev))
2821 return;
2823 if (np->msi_flags & NV_MSI_X_ENABLED)
2824 status = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
2825 else
2826 status = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
2828 printk(KERN_INFO "%s: Got tx_timeout. irq: %08x\n", dev->name, status);
2831 int i;
2833 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
2834 printk(KERN_INFO "%s: Ring at %lx: get %lx put %lx\n",
2835 dev->name, (unsigned long)np->tx_ring.orig,
2836 (unsigned long)np->get_tx.orig, (unsigned long)np->put_tx.orig);
2837 } else {
2838 printk(KERN_INFO "%s: Ring at %lx: get %lx put %lx\n",
2839 dev->name, (unsigned long)np->tx_ring.ex,
2840 (unsigned long)np->get_tx.ex, (unsigned long)np->put_tx.ex);
2842 printk(KERN_INFO "%s: Dumping tx registers\n", dev->name);
2843 for (i=0;i<=np->register_size;i+= 32) {
2844 printk(KERN_INFO "%3x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
2845 i,
2846 readl(base + i + 0), readl(base + i + 4),
2847 readl(base + i + 8), readl(base + i + 12),
2848 readl(base + i + 16), readl(base + i + 20),
2849 readl(base + i + 24), readl(base + i + 28));
2851 printk(KERN_INFO "%s: Dumping tx ring\n", dev->name);
2852 for (i=0;i<np->tx_ring_size;i+= 4) {
2853 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
2854 printk(KERN_INFO "%03x: %08x %08x // %08x %08x // %08x %08x // %08x %08x\n",
2855 i,
2856 le32_to_cpu(np->tx_ring.orig[i].PacketBuffer),
2857 le32_to_cpu(np->tx_ring.orig[i].FlagLen),
2858 le32_to_cpu(np->tx_ring.orig[i+1].PacketBuffer),
2859 le32_to_cpu(np->tx_ring.orig[i+1].FlagLen),
2860 le32_to_cpu(np->tx_ring.orig[i+2].PacketBuffer),
2861 le32_to_cpu(np->tx_ring.orig[i+2].FlagLen),
2862 le32_to_cpu(np->tx_ring.orig[i+3].PacketBuffer),
2863 le32_to_cpu(np->tx_ring.orig[i+3].FlagLen));
2864 } else {
2865 printk(KERN_INFO "%03x: %08x %08x %08x // %08x %08x %08x // %08x %08x %08x // %08x %08x %08x\n",
2866 i,
2867 le32_to_cpu(np->tx_ring.ex[i].PacketBufferHigh),
2868 le32_to_cpu(np->tx_ring.ex[i].PacketBufferLow),
2869 le32_to_cpu(np->tx_ring.ex[i].FlagLen),
2870 le32_to_cpu(np->tx_ring.ex[i+1].PacketBufferHigh),
2871 le32_to_cpu(np->tx_ring.ex[i+1].PacketBufferLow),
2872 le32_to_cpu(np->tx_ring.ex[i+1].FlagLen),
2873 le32_to_cpu(np->tx_ring.ex[i+2].PacketBufferHigh),
2874 le32_to_cpu(np->tx_ring.ex[i+2].PacketBufferLow),
2875 le32_to_cpu(np->tx_ring.ex[i+2].FlagLen),
2876 le32_to_cpu(np->tx_ring.ex[i+3].PacketBufferHigh),
2877 le32_to_cpu(np->tx_ring.ex[i+3].PacketBufferLow),
2878 le32_to_cpu(np->tx_ring.ex[i+3].FlagLen));
2883 nv_disable_irq(dev);
2884 spin_lock_irq(&np->lock);
2886 /* 1) stop tx engine */
2887 nv_stop_tx(dev);
2889 /* 2) check that the packets were not sent already: */
2890 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
2891 nv_tx_done(dev);
2892 else
2893 nv_tx_done_optimized(dev, np->tx_ring_size);
2895 /* 3) if there are dead entries: clear everything */
2896 if (np->get_tx_ctx != np->put_tx_ctx) {
2897 printk(KERN_DEBUG "%s: tx_timeout: dead entries!\n", dev->name);
2898 nv_drain_tx(dev);
2899 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
2900 np->get_tx.orig = np->put_tx.orig = np->first_tx.orig;
2901 else
2902 np->get_tx.ex = np->put_tx.ex = np->first_tx.ex;
2903 np->get_tx_ctx = np->put_tx_ctx = np->first_tx_ctx;
2904 setup_hw_rings(dev, NV_SETUP_TX_RING);
2907 netif_wake_queue(dev);
2908 /* 4) restart tx engine */
2909 nv_start_tx(dev);
2911 spin_unlock_irq(&np->lock);
2912 nv_enable_irq(dev);
2915 /*
2916 * Called when the nic notices a mismatch between the actual data len on the
2917 * wire and the len indicated in the 802 header
2918 */
2919 static int nv_getlen(struct net_device *dev, void *packet, int datalen)
2921 int hdrlen; /* length of the 802 header */
2922 int protolen; /* length as stored in the proto field */
2924 /* 1) calculate len according to header */
2925 if ( ((struct vlan_ethhdr *)packet)->h_vlan_proto == __constant_htons(ETH_P_8021Q)) {
2926 protolen = ntohs( ((struct vlan_ethhdr *)packet)->h_vlan_encapsulated_proto );
2927 hdrlen = VLAN_HLEN;
2928 } else {
2929 protolen = ntohs( ((struct ethhdr *)packet)->h_proto);
2930 hdrlen = ETH_HLEN;
2932 dprintk(KERN_DEBUG "%s: nv_getlen: datalen %d, protolen %d, hdrlen %d\n",
2933 dev->name, datalen, protolen, hdrlen);
2934 if (protolen > ETH_DATA_LEN)
2935 return datalen; /* Value in proto field not a len, no checks possible */
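/* A length/type field above ETH_DATA_LEN is an EtherType rather than a
 * payload length, so no length consistency check is possible there. */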
2937 protolen += hdrlen;
2938 /* consistency checks: */
2939 if (datalen > ETH_ZLEN) {
2940 if (datalen >= protolen) {
2941 /* more data on wire than in 802 header, trim off
2942 * additional data.
2943 */
2944 dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n",
2945 dev->name, protolen);
2946 return protolen;
2947 } else {
2948 /* less data on wire than mentioned in header.
2949 * Discard the packet.
2950 */
2951 dprintk(KERN_DEBUG "%s: nv_getlen: discarding long packet.\n",
2952 dev->name);
2953 return -1;
2955 } else {
2956 /* short packet. Accept only if 802 values are also short */
2957 if (protolen > ETH_ZLEN) {
2958 dprintk(KERN_DEBUG "%s: nv_getlen: discarding short packet.\n",
2959 dev->name);
2960 return -1;
2962 dprintk(KERN_DEBUG "%s: nv_getlen: accepting %d bytes.\n",
2963 dev->name, datalen);
2964 return datalen;
2968 static inline void nv_rx_process(struct net_device *dev)
2970 struct fe_priv *np = get_nvpriv(dev);
2971 u32 Flags;
2972 struct sk_buff *skb;
2973 int len;
2975 dprintk("%s:%s\n",dev->name,__FUNCTION__);
2976 while((np->get_rx.orig != np->put_rx.orig) &&
2977 !((Flags = le32_to_cpu(np->get_rx.orig->FlagLen)) & NV_RX_AVAIL)) {
2979 pci_unmap_single(np->pci_dev, np->get_rx_ctx->dma,
2980 np->get_rx_ctx->dma_len,
2981 PCI_DMA_FROMDEVICE);
2983 skb = np->get_rx_ctx->skb;
2984 np->get_rx_ctx->skb = NULL;
2987 int j;
2988 dprintk(KERN_DEBUG "Dumping packet (flags 0x%x).",Flags);
2989 for (j=0; j<64; j++) {
2990 if ((j%16) == 0)
2991 dprintk("\n%03x:", j);
2992 dprintk(" %02x", ((unsigned char*)skb->data)[j]);
2994 dprintk("\n");
2997 if (np->desc_ver == DESC_VER_1) {
2999 if (likely(Flags & NV_RX_DESCRIPTORVALID)) {
3000 len = Flags & LEN_MASK_V1;
3001 if (unlikely(Flags & NV_RX_ERROR)) {
3002 if (Flags & NV_RX_ERROR4) {
3003 len = nv_getlen(dev, skb->data, len);
3004 if (len < 0 || len > np->rx_buf_sz) {
3005 np->stats.rx_errors++;
3006 dev_kfree_skb(skb);
3007 goto next_pkt;
3010 /* framing errors are soft errors */
3011 else if (Flags & NV_RX_FRAMINGERR) {
3012 if (Flags & NV_RX_SUBSTRACT1) {
3013 len--;
3016 /* the rest are hard errors */
3017 else {
3018 if (Flags & NV_RX_MISSEDFRAME)
3019 np->stats.rx_missed_errors++;
3020 if (Flags & NV_RX_CRCERR)
3021 np->stats.rx_crc_errors++;
3022 if (Flags & NV_RX_OVERFLOW)
3023 np->stats.rx_over_errors++;
3024 np->stats.rx_errors++;
3025 dev_kfree_skb(skb);
3026 goto next_pkt;
3029 } else {
3030 dev_kfree_skb(skb);
3031 goto next_pkt;
3033 } else {
3034 if (likely(Flags & NV_RX2_DESCRIPTORVALID)) {
3035 len = Flags & LEN_MASK_V2;
3036 if (unlikely(Flags & NV_RX2_ERROR)) {
3037 if (Flags & NV_RX2_ERROR4) {
3038 len = nv_getlen(dev, skb->data, len);
3039 if (len < 0 || len > np->rx_buf_sz) {
3040 np->stats.rx_errors++;
3041 dev_kfree_skb(skb);
3042 goto next_pkt;
3045 /* framing errors are soft errors */
3046 else if (Flags & NV_RX2_FRAMINGERR) {
3047 if (Flags & NV_RX2_SUBSTRACT1) {
3048 len--;
3051 /* the rest are hard errors */
3052 else {
3053 if (Flags & NV_RX2_CRCERR)
3054 np->stats.rx_crc_errors++;
3055 if (Flags & NV_RX2_OVERFLOW)
3056 np->stats.rx_over_errors++;
3057 np->stats.rx_errors++;
3058 dev_kfree_skb(skb);
3059 goto next_pkt;
3062 if (((Flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_TCP) || ((Flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_UDP))
3063 /* ip and tcp or udp */
3064 skb->ip_summed = CHECKSUM_UNNECESSARY;
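/* These flag values mean the nic verified the IP header checksum plus
 * the TCP or UDP checksum, so the stack can skip its own check. */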
3065 } else {
3066 dev_kfree_skb(skb);
3067 goto next_pkt;
3071 /* got a valid packet - forward it to the network core */
3072 dprintk(KERN_DEBUG "%s: nv_rx_process:NVLAN rx done\n", dev->name);
3073 skb_put(skb, len);
3074 skb->protocol = eth_type_trans(skb, dev);
3075 netif_rx(skb);
3076 dev->last_rx = jiffies;
3077 np->stats.rx_packets++;
3078 np->stats.rx_bytes += len;
3079 next_pkt:
3080 if (unlikely(np->get_rx.orig++ == np->last_rx.orig))
3081 np->get_rx.orig = np->first_rx.orig;
3082 if (unlikely(np->get_rx_ctx++ == np->last_rx_ctx))
3083 np->get_rx_ctx = np->first_rx_ctx;
3087 static inline int nv_rx_process_optimized(struct net_device *dev, int max_work)
3089 struct fe_priv *np = get_nvpriv(dev);
3090 u32 Flags;
3091 u32 vlanflags = 0;
3092 u32 rx_processed_cnt = 0;
3093 struct sk_buff *skb;
3094 int len;
3096 while((np->get_rx.ex != np->put_rx.ex) &&
3097 !((Flags = le32_to_cpu(np->get_rx.ex->FlagLen)) & NV_RX2_AVAIL) &&
3098 (rx_processed_cnt++ < max_work)) {
3100 pci_unmap_single(np->pci_dev, np->get_rx_ctx->dma,
3101 np->get_rx_ctx->dma_len,
3102 PCI_DMA_FROMDEVICE);
3104 skb = np->get_rx_ctx->skb;
3105 np->get_rx_ctx->skb = NULL;
3107 /* look at what we actually got: */
3108 if (likely(Flags & NV_RX2_DESCRIPTORVALID)) {
3109 len = Flags & LEN_MASK_V2;
3110 if (unlikely(Flags & NV_RX2_ERROR)) {
3111 if (Flags & NV_RX2_ERROR4) {
3112 len = nv_getlen(dev, skb->data, len);
3113 if (len < 0 || len > np->rx_buf_sz) {
3114 np->rx_len_errors++;
3115 dev_kfree_skb(skb);
3116 goto next_pkt;
3119 /* framing errors are soft errors */
3120 else if (Flags & NV_RX2_FRAMINGERR) {
3121 if (Flags & NV_RX2_SUBSTRACT1) {
3122 len--;
3125 /* the rest are hard errors */
3126 else {
3127 dev_kfree_skb(skb);
3128 goto next_pkt;
3132 if (likely(np->rx_csum)) {
3133 if (likely(((Flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_TCP) || ((Flags & NV_RX2_CHECKSUMMASK) == NV_RX2_CHECKSUM_IP_UDP)))
3134 /* ip and tcp or udp */
3135 skb->ip_summed = CHECKSUM_UNNECESSARY;
3137 dprintk(KERN_DEBUG "%s: nv_rx_process_optimized:NVLAN rx done\n", dev->name);
3139 /* got a valid packet - forward it to the network core */
3140 skb_put(skb, len);
3141 skb->protocol = eth_type_trans(skb, dev);
3142 prefetch(skb->data);
3144 if (likely(!np->vlangrp)) {
3145 netif_rx(skb);
3146 } else {
3147 vlanflags = le32_to_cpu(np->get_rx.ex->PacketBufferLow);
3148 if (vlanflags & NV_RX3_VLAN_TAG_PRESENT)
3149 vlan_hwaccel_rx(skb, np->vlangrp, vlanflags & NV_RX3_VLAN_TAG_MASK);
3150 else
3151 netif_rx(skb);
3154 dev->last_rx = jiffies;
3155 np->stats.rx_packets++;
3156 np->stats.rx_bytes += len;
3157 } else {
3158 dev_kfree_skb(skb);
3160 next_pkt:
3161 if (unlikely(np->get_rx.ex++ == np->last_rx.ex))
3162 np->get_rx.ex = np->first_rx.ex;
3163 if (unlikely(np->get_rx_ctx++ == np->last_rx_ctx))
3164 np->get_rx_ctx = np->first_rx_ctx;
3166 return rx_processed_cnt;
3169 static void set_bufsize(struct net_device *dev)
3171 struct fe_priv *np = get_nvpriv(dev);
3173 if (dev->mtu <= ETH_DATA_LEN)
3174 np->rx_buf_sz = ETH_DATA_LEN + NV_RX_HEADERS;
3175 else
3176 np->rx_buf_sz = dev->mtu + NV_RX_HEADERS;
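/* rx_buf_sz is the payload size plus NV_RX_HEADERS of extra room,
 * presumably headroom for link-layer framing (inferred from the
 * constant's name, not from documentation). */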
3179 /*
3180 * nv_change_mtu: dev->change_mtu function
3181 * Called with dev_base_lock held for read.
3182 */
3183 static int nv_change_mtu(struct net_device *dev, int new_mtu)
3185 struct fe_priv *np = get_nvpriv(dev);
3186 int old_mtu;
3188 if (new_mtu < 64 || new_mtu > np->pkt_limit)
3189 return -EINVAL;
3191 old_mtu = dev->mtu;
3192 dev->mtu = new_mtu;
3194 /* return early if the buffer sizes will not change */
3195 if (old_mtu <= ETH_DATA_LEN && new_mtu <= ETH_DATA_LEN)
3196 return 0;
3197 if (old_mtu == new_mtu)
3198 return 0;
3200 /* synchronized against open : rtnl_lock() held by caller */
3201 if (netif_running(dev)) {
3202 u8 __iomem *base = get_hwbase(dev);
3203 /*
3204 * It seems that the nic preloads valid ring entries into an
3205 * internal buffer. The procedure for flushing everything is
3206 * guessed; there is probably a simpler approach.
3207 * Changing the MTU is a rare event, so it shouldn't matter.
3208 */
3209 nv_disable_hw_interrupts(dev,np->irqmask);
3210 nv_disable_irq(dev);
3211 #if NVVER > FEDORA5
3212 netif_tx_lock_bh(dev);
3213 #else
3214 spin_lock_bh(&dev->xmit_lock);
3215 #endif
3216 spin_lock(&np->lock);
3217 /* stop engines */
3218 nv_stop_rx(dev);
3219 nv_stop_tx(dev);
3220 nv_txrx_reset(dev);
3221 /* drain rx queue */
3222 nv_drain_rx(dev);
3223 nv_drain_tx(dev);
3224 /* reinit driver view of the rx queue */
3225 set_bufsize(dev);
3226 if (nv_init_ring(dev)) {
3227 if (!np->in_shutdown)
3228 mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
3230 /* reinit nic view of the rx queue */
3231 writel(np->rx_buf_sz, base + NvRegOffloadConfig);
3232 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
3233 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
3234 base + NvRegRingSizes);
3235 pci_push(base);
3236 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
3237 pci_push(base);
3239 /* restart rx engine */
3240 nv_start_rx(dev);
3241 nv_start_tx(dev);
3242 spin_unlock(&np->lock);
3243 #if NVVER > FEDORA5
3244 netif_tx_unlock_bh(dev);
3245 #else
3246 spin_unlock_bh(&dev->xmit_lock);
3247 #endif
3248 nv_enable_irq(dev);
3249 nv_enable_hw_interrupts(dev,np->irqmask);
3251 return 0;
3254 static void nv_copy_mac_to_hw(struct net_device *dev)
3256 u8 __iomem *base = get_hwbase(dev);
3257 u32 mac[2];
3259 mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) +
3260 (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24);
3261 mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8);
3262 writel(mac[0], base + NvRegMacAddrA);
3263 writel(mac[1], base + NvRegMacAddrB);
3267 /*
3268 * nv_set_mac_address: dev->set_mac_address function
3269 * Called with rtnl_lock() held.
3270 */
3271 static int nv_set_mac_address(struct net_device *dev, void *addr)
3273 struct fe_priv *np = get_nvpriv(dev);
3274 struct sockaddr *macaddr = (struct sockaddr*)addr;
3276 if (!is_valid_ether_addr(macaddr->sa_data))
3277 return -EADDRNOTAVAIL;
3279 dprintk("%s:%s\n",dev->name,__FUNCTION__);
3280 /* synchronized against open : rtnl_lock() held by caller */
3281 memcpy(dev->dev_addr, macaddr->sa_data, ETH_ALEN);
3283 if (netif_running(dev)) {
3284 #if NVVER > FEDORA5
3285 netif_tx_lock_bh(dev);
3286 #else
3287 spin_lock_bh(&dev->xmit_lock);
3288 #endif
3289 spin_lock_irq(&np->lock);
3291 /* stop rx engine */
3292 nv_stop_rx(dev);
3294 /* set mac address */
3295 nv_copy_mac_to_hw(dev);
3297 /* restart rx engine */
3298 nv_start_rx(dev);
3299 spin_unlock_irq(&np->lock);
3300 #if NVVER > FEDORA5
3301 netif_tx_unlock_bh(dev);
3302 #else
3303 spin_unlock_bh(&dev->xmit_lock);
3304 #endif
3305 } else {
3306 nv_copy_mac_to_hw(dev);
3308 return 0;
3311 /*
3312 * nv_set_multicast: dev->set_multicast function
3313 * Called with dev->xmit_lock held.
3314 */
3315 static void nv_set_multicast(struct net_device *dev)
3317 struct fe_priv *np = get_nvpriv(dev);
3318 u8 __iomem *base = get_hwbase(dev);
3319 u32 addr[2];
3320 u32 mask[2];
3321 u32 pff = readl(base + NvRegPacketFilterFlags) & NVREG_PFF_PAUSE_RX;
3323 memset(addr, 0, sizeof(addr));
3324 memset(mask, 0, sizeof(mask));
3326 if (dev->flags & IFF_PROMISC) {
3327 dprintk(KERN_DEBUG "%s: Promiscuous mode enabled.\n", dev->name);
3328 pff |= NVREG_PFF_PROMISC;
3329 } else {
3330 pff |= NVREG_PFF_MYADDR;
3332 if (dev->flags & IFF_ALLMULTI || dev->mc_list) {
3333 u32 alwaysOff[2];
3334 u32 alwaysOn[2];
3336 alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0xffffffff;
3337 if (dev->flags & IFF_ALLMULTI) {
3338 alwaysOn[0] = alwaysOn[1] = alwaysOff[0] = alwaysOff[1] = 0;
3339 } else {
3340 struct dev_mc_list *walk;
3342 walk = dev->mc_list;
3343 while (walk != NULL) {
3344 u32 a, b;
3345 a = le32_to_cpu(*(u32 *) walk->dmi_addr);
3346 b = le16_to_cpu(*(u16 *) (&walk->dmi_addr[4]));
3347 alwaysOn[0] &= a;
3348 alwaysOff[0] &= ~a;
3349 alwaysOn[1] &= b;
3350 alwaysOff[1] &= ~b;
3351 walk = walk->next;
3354 addr[0] = alwaysOn[0];
3355 addr[1] = alwaysOn[1];
3356 mask[0] = alwaysOn[0] | alwaysOff[0];
3357 mask[1] = alwaysOn[1] | alwaysOff[1];
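/* alwaysOn collects the bits set in every listed address, alwaysOff
 * the bits clear in every one; addr/mask built from them presumably
 * make the hardware accept any address matching the list on those
 * common bits, i.e. a superset of the exact multicast list. */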
3358 } else {
3359 mask[0] = NVREG_MCASTMASKA_NONE;
3360 mask[1] = NVREG_MCASTMASKB_NONE;
3363 addr[0] |= NVREG_MCASTADDRA_FORCE;
3364 pff |= NVREG_PFF_ALWAYS;
3365 spin_lock_irq(&np->lock);
3366 nv_stop_rx(dev);
3367 writel(addr[0], base + NvRegMulticastAddrA);
3368 writel(addr[1], base + NvRegMulticastAddrB);
3369 writel(mask[0], base + NvRegMulticastMaskA);
3370 writel(mask[1], base + NvRegMulticastMaskB);
3371 writel(pff, base + NvRegPacketFilterFlags);
3372 dprintk(KERN_INFO "%s: reconfiguration for multicast lists.\n",
3373 dev->name);
3374 nv_start_rx(dev);
3375 spin_unlock_irq(&np->lock);
3378 static void nv_update_pause(struct net_device *dev, u32 pause_flags)
3380 struct fe_priv *np = get_nvpriv(dev);
3381 u8 __iomem *base = get_hwbase(dev);
3382 u32 pause_enable;
3384 np->pause_flags &= ~(NV_PAUSEFRAME_TX_ENABLE | NV_PAUSEFRAME_RX_ENABLE);
3386 if (np->pause_flags & NV_PAUSEFRAME_RX_CAPABLE) {
3387 u32 pff = readl(base + NvRegPacketFilterFlags) & ~NVREG_PFF_PAUSE_RX;
3388 if (pause_flags & NV_PAUSEFRAME_RX_ENABLE) {
3389 writel(pff|NVREG_PFF_PAUSE_RX, base + NvRegPacketFilterFlags);
3390 np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
3391 } else {
3392 writel(pff, base + NvRegPacketFilterFlags);
3395 if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE) {
3396 u32 regmisc = readl(base + NvRegMisc1) & ~NVREG_MISC1_PAUSE_TX;
3397 if (pause_flags & NV_PAUSEFRAME_TX_ENABLE) {
3398 pause_enable = NVREG_TX_PAUSEFRAME_ENABLE_V1;
3399 if (np->driver_data & DEV_HAS_PAUSEFRAME_TX_V2)
3400 pause_enable = NVREG_TX_PAUSEFRAME_ENABLE_V2;
3401 if (np->driver_data & DEV_HAS_PAUSEFRAME_TX_V3)
3402 pause_enable = NVREG_TX_PAUSEFRAME_ENABLE_V3;
3403 writel(pause_enable, base + NvRegTxPauseFrame);
3404 writel(regmisc|NVREG_MISC1_PAUSE_TX, base + NvRegMisc1);
3405 np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
3406 } else {
3407 writel(NVREG_TX_PAUSEFRAME_DISABLE, base + NvRegTxPauseFrame);
3408 writel(regmisc, base + NvRegMisc1);
3413 /**
3414 * nv_update_linkspeed: Set up the MAC according to the link partner
3415 * @dev: Network device to be configured
3417 * The function queries the PHY and checks if there is a link partner.
3418 * If yes, then it sets up the MAC accordingly. Otherwise, the MAC is
3419 * set to 10 MBit HD.
3421 * The function returns 0 if there is no link partner and 1 if there is
3422 * a good link partner.
3423 */
3424 static int nv_update_linkspeed(struct net_device *dev)
3426 struct fe_priv *np = get_nvpriv(dev);
3427 u8 __iomem *base = get_hwbase(dev);
3428 int adv = 0;
3429 int lpa = 0;
3430 int adv_lpa, adv_pause, lpa_pause;
3431 int newls = np->linkspeed;
3432 int newdup = np->duplex;
3433 int mii_status;
3434 int retval = 0;
3435 u32 control_1000, status_1000, phyreg, pause_flags, txreg;
3436 u32 txrxFlags = 0;
3438 /* BMSR_LSTATUS is latched, read it twice:
3439 * we want the current value.
3440 */
3441 mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
3442 mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
3444 if (!(mii_status & BMSR_LSTATUS)) {
3445 dprintk(KERN_DEBUG "%s: no link detected by phy - falling back to 10HD.\n",
3446 dev->name);
3447 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
3448 newdup = 0;
3449 retval = 0;
3450 goto set_speed;
3453 if (np->autoneg == AUTONEG_DISABLE) {
3454 dprintk(KERN_DEBUG "%s: nv_update_linkspeed: autoneg off, PHY set to 0x%04x.\n",
3455 dev->name, np->fixed_mode);
3456 if (np->fixed_mode & LPA_100FULL) {
3457 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
3458 newdup = 1;
3459 } else if (np->fixed_mode & LPA_100HALF) {
3460 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
3461 newdup = 0;
3462 } else if (np->fixed_mode & LPA_10FULL) {
3463 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
3464 newdup = 1;
3465 } else {
3466 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
3467 newdup = 0;
3469 retval = 1;
3470 goto set_speed;
3472 /* check auto negotiation is complete */
3473 if (!(mii_status & BMSR_ANEGCOMPLETE)) {
3474 /* still in autonegotiation - configure nic for 10 MBit HD and wait. */
3475 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
3476 newdup = 0;
3477 retval = 0;
3478 dprintk(KERN_DEBUG "%s: autoneg not completed - falling back to 10HD.\n", dev->name);
3479 goto set_speed;
3482 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
3483 lpa = mii_rw(dev, np->phyaddr, MII_LPA, MII_READ);
3484 dprintk(KERN_DEBUG "%s: nv_update_linkspeed: PHY advertises 0x%04x, lpa 0x%04x.\n",
3485 dev->name, adv, lpa);
3486 retval = 1;
3487 if (np->gigabit == PHY_GIGABIT) {
3488 control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
3489 status_1000 = mii_rw(dev, np->phyaddr, MII_STAT1000, MII_READ);
3491 if ((control_1000 & ADVERTISE_1000FULL) &&
3492 (status_1000 & LPA_1000FULL)) {
3493 dprintk(KERN_DEBUG "%s: nv_update_linkspeed: GBit ethernet detected.\n",
3494 dev->name);
3495 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_1000;
3496 newdup = 1;
3497 goto set_speed;
3501 /* FIXME: handle parallel detection properly */
3502 adv_lpa = lpa & adv;
3503 if (adv_lpa & LPA_100FULL) {
3504 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
3505 newdup = 1;
3506 } else if (adv_lpa & LPA_100HALF) {
3507 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_100;
3508 newdup = 0;
3509 } else if (adv_lpa & LPA_10FULL) {
3510 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
3511 newdup = 1;
3512 } else if (adv_lpa & LPA_10HALF) {
3513 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
3514 newdup = 0;
3515 } else {
3516 dprintk(KERN_DEBUG "%s: bad ability %04x - falling back to 10HD.\n", dev->name, adv_lpa);
3517 newls = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
3518 newdup = 0;
3521 set_speed:
3522 if (np->duplex == newdup && np->linkspeed == newls)
3523 return retval;
3525 dprintk(KERN_INFO "%s: changing link setting from %d/%d to %d/%d.\n",
3526 dev->name, np->linkspeed, np->duplex, newls, newdup);
3528 np->duplex = newdup;
3529 np->linkspeed = newls;
3531 /* The transmitter and receiver must be restarted for a safe update */
3532 if (readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_START) {
3533 txrxFlags |= NV_RESTART_TX;
3534 nv_stop_tx(dev);
3536 if (readl(base + NvRegReceiverControl) & NVREG_RCVCTL_START) {
3537 txrxFlags |= NV_RESTART_RX;
3538 nv_stop_rx(dev);
3542 if (np->gigabit == PHY_GIGABIT) {
3543 phyreg = readl(base + NvRegRandomSeed);
3544 phyreg &= ~(0x3FF00);
3545 if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_10)
3546 phyreg |= NVREG_RNDSEED_FORCE3;
3547 else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_100)
3548 phyreg |= NVREG_RNDSEED_FORCE2;
3549 else if ((np->linkspeed & 0xFFF) == NVREG_LINKSPEED_1000)
3550 phyreg |= NVREG_RNDSEED_FORCE;
3551 writel(phyreg, base + NvRegRandomSeed);
3554 phyreg = readl(base + NvRegPhyInterface);
3555 phyreg &= ~(PHY_HALF|PHY_100|PHY_1000);
3556 if (np->duplex == 0)
3557 phyreg |= PHY_HALF;
3558 if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_100)
3559 phyreg |= PHY_100;
3560 else if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
3561 phyreg |= PHY_1000;
3562 writel(phyreg, base + NvRegPhyInterface);
3564 if (phyreg & PHY_RGMII) {
3565 if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
3566 txreg = NVREG_TX_DEFERRAL_RGMII_1000;
3567 else
3568 txreg = NVREG_TX_DEFERRAL_RGMII_10_100;
3569 } else {
3570 txreg = NVREG_TX_DEFERRAL_DEFAULT;
3572 writel(txreg, base + NvRegTxDeferral);
3574 if (np->desc_ver == DESC_VER_1) {
3575 txreg = NVREG_TX_WM_DESC1_DEFAULT;
3576 } else {
3577 if ((np->linkspeed & NVREG_LINKSPEED_MASK) == NVREG_LINKSPEED_1000)
3578 txreg = NVREG_TX_WM_DESC2_3_1000;
3579 else
3580 txreg = NVREG_TX_WM_DESC2_3_DEFAULT;
3582 writel(txreg, base + NvRegTxWatermark);
3583 writel(NVREG_MISC1_FORCE | ( np->duplex ? 0 : NVREG_MISC1_HD),
3584 base + NvRegMisc1);
3585 pci_push(base);
3586 writel(np->linkspeed, base + NvRegLinkSpeed);
3587 pci_push(base);
3589 pause_flags = 0;
3590 /* setup pause frame */
3591 if (np->duplex != 0) {
3592 if (np->autoneg && np->pause_flags & NV_PAUSEFRAME_AUTONEG) {
3593 adv_pause = adv & (ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM);
3594 lpa_pause = lpa & (LPA_PAUSE_CAP| LPA_PAUSE_ASYM);
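/* The switch below resolves the pause mode from the local advertisement
 * and the link partner's abilities, following the symmetric/asymmetric
 * pause resolution of IEEE 802.3 Annex 28B. */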
3596 switch (adv_pause) {
3597 case (ADVERTISE_PAUSE_CAP):
3598 if (lpa_pause & LPA_PAUSE_CAP) {
3599 pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
3600 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
3601 pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
3603 break;
3604 case (ADVERTISE_PAUSE_ASYM):
3605 if (lpa_pause == (LPA_PAUSE_CAP| LPA_PAUSE_ASYM))
3607 pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
3609 break;
3610 case (ADVERTISE_PAUSE_CAP| ADVERTISE_PAUSE_ASYM):
3611 if (lpa_pause & LPA_PAUSE_CAP)
3613 pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
3614 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
3615 pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
3617 if (lpa_pause == LPA_PAUSE_ASYM)
3619 pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
3621 break;
3623 } else {
3624 pause_flags = np->pause_flags;
3627 nv_update_pause(dev, pause_flags);
3629 if (txrxFlags & NV_RESTART_TX)
3630 nv_start_tx(dev);
3631 if (txrxFlags & NV_RESTART_RX)
3632 nv_start_rx(dev);
3634 return retval;
3637 static void nv_linkchange(struct net_device *dev)
3639 if (nv_update_linkspeed(dev)) {
3640 if (!netif_carrier_ok(dev)) {
3641 netif_carrier_on(dev);
3642 printk(KERN_INFO "%s: link up.\n", dev->name);
3643 nv_start_rx(dev);
3645 } else {
3646 if (netif_carrier_ok(dev)) {
3647 netif_carrier_off(dev);
3648 printk(KERN_INFO "%s: link down.\n", dev->name);
3649 nv_stop_rx(dev);
3654 static void nv_link_irq(struct net_device *dev)
3656 u8 __iomem *base = get_hwbase(dev);
3657 u32 miistat;
3659 miistat = readl(base + NvRegMIIStatus);
3660 writel(NVREG_MIISTAT_LINKCHANGE, base + NvRegMIIStatus);
3661 dprintk(KERN_INFO "%s: link change irq, status 0x%x.\n", dev->name, miistat);
3663 if (miistat & (NVREG_MIISTAT_LINKCHANGE))
3664 nv_linkchange(dev);
3665 dprintk(KERN_DEBUG "%s: link change notification done.\n", dev->name);
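/*
 * nv_nic_irq: legacy/MSI interrupt handler for the non-optimized descriptor
 * path. It reads and acknowledges the irq status (NvRegIrqStatus, or
 * NvRegMSIXIrqStatus when MSI-X is enabled), completes tx under np->lock,
 * processes rx and refills the ring (arming the oom_kick timer if allocation
 * fails), and handles link events. After max_interrupt_work iterations it
 * masks interrupts on the nic and defers further work to the nic_poll timer.
 */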
3668 #if NVVER < FEDORA7
3669 static irqreturn_t nv_nic_irq(int foo, void *data, struct pt_regs *regs)
3670 #else
3671 static irqreturn_t nv_nic_irq(int foo, void *data)
3672 #endif
3674 struct net_device *dev = (struct net_device *) data;
3675 struct fe_priv *np = get_nvpriv(dev);
3676 u8 __iomem *base = get_hwbase(dev);
3677 u32 events,mask;
3678 int i;
3680 dprintk("%s:%s\n",dev->name,__FUNCTION__);
3682 for (i=0; ; i++) {
3683 if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
3684 events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
3685 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
3686 } else {
3687 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
3688 writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
3690 pci_push(base);
3691 dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
3692 mask = readl(base + NvRegIrqMask);
3693 if (!(events & mask))
3694 break;
3696 spin_lock(&np->lock);
3697 nv_tx_done(dev);
3698 spin_unlock(&np->lock);
3700 nv_rx_process(dev);
3701 if (nv_alloc_rx(dev)) {
3702 spin_lock(&np->lock);
3703 if (!np->in_shutdown)
3704 mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
3705 spin_unlock(&np->lock);
3708 if (events & NVREG_IRQ_LINK) {
3709 spin_lock(&np->lock);
3710 nv_link_irq(dev);
3711 spin_unlock(&np->lock);
3713 if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
3714 spin_lock(&np->lock);
3715 nv_linkchange(dev);
3716 spin_unlock(&np->lock);
3717 np->link_timeout = jiffies + LINK_TIMEOUT;
3719 if (events & (NVREG_IRQ_TX_ERR)) {
3720 dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
3721 dev->name, events);
3723 if (events & (NVREG_IRQ_UNKNOWN)) {
3724 printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n",
3725 dev->name, events);
3727 if (i > max_interrupt_work) {
3728 spin_lock(&np->lock);
3729 /* disable interrupts on the nic */
3730 if (!(np->msi_flags & NV_MSI_X_ENABLED))
3731 writel(0, base + NvRegIrqMask);
3732 else
3733 writel(np->irqmask, base + NvRegIrqMask);
3734 pci_push(base);
3736 if (!np->in_shutdown) {
3737 np->nic_poll_irq = np->irqmask;
3738 mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
3740 printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq.\n", dev->name, i);
3741 spin_unlock(&np->lock);
3742 break;
3746 dprintk(KERN_DEBUG "%s: nv_nic_irq completed\n", dev->name);
3748 return IRQ_RETVAL(i);
3751 #define TX_WORK_PER_LOOP 64
3752 #define RX_WORK_PER_LOOP 64
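/*
 * nv_nic_irq_optimized: interrupt handler for the DESC_VER_3 path. Same
 * status read/ack sequence as nv_nic_irq, but it bounds the work done per
 * pass with TX_WORK_PER_LOOP/RX_WORK_PER_LOOP and additionally handles
 * NVREG_IRQ_RECOVER_ERROR by masking interrupts, setting np->recover_error
 * and deferring recovery to the nic_poll timer.
 */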
3753 #if NVVER < FEDORA7
3754 static irqreturn_t nv_nic_irq_optimized(int foo, void *data, struct pt_regs *regs)
3755 #else
3756 static irqreturn_t nv_nic_irq_optimized(int foo, void *data)
3757 #endif
3759 struct net_device *dev = (struct net_device *) data;
3760 struct fe_priv *np = get_nvpriv(dev);
3761 u8 __iomem *base = get_hwbase(dev);
3762 u32 events,mask;
3763 int i = 1;
3765 do {
3766 if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
3767 events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
3768 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
3769 } else {
3770 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
3771 writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
3774 mask = readl(base + NvRegIrqMask);
3775 if (events & mask) {
3777 spin_lock(&np->lock);
3778 nv_tx_done_optimized(dev, TX_WORK_PER_LOOP);
3779 spin_unlock(&np->lock);
3781 if (nv_rx_process_optimized(dev, RX_WORK_PER_LOOP)) {
3782 if (unlikely(nv_alloc_rx_optimized(dev))) {
3783 spin_lock(&np->lock);
3784 if (!np->in_shutdown)
3785 mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
3786 spin_unlock(&np->lock);
3789 if (unlikely(events & NVREG_IRQ_LINK)) {
3790 spin_lock(&np->lock);
3791 nv_link_irq(dev);
3792 spin_unlock(&np->lock);
3794 if (unlikely(np->need_linktimer && time_after(jiffies, np->link_timeout))) {
3795 spin_lock(&np->lock);
3796 nv_linkchange(dev);
3797 spin_unlock(&np->lock);
3798 np->link_timeout = jiffies + LINK_TIMEOUT;
3800 if (unlikely(events & NVREG_IRQ_RECOVER_ERROR)) {
3801 spin_lock(&np->lock);
3802 /* disable interrupts on the nic */
3803 if (!(np->msi_flags & NV_MSI_X_ENABLED))
3804 writel(0, base + NvRegIrqMask);
3805 else
3806 writel(np->irqmask, base + NvRegIrqMask);
3807 pci_push(base);
3809 if (!np->in_shutdown) {
3810 np->nic_poll_irq = np->irqmask;
3811 np->recover_error = 1;
3812 mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
3814 spin_unlock(&np->lock);
3815 break;
3817 } else
3818 break;
3820 while (i++ <= max_interrupt_work);
3822 return IRQ_RETVAL(i);
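/*
 * nv_nic_irq_tx: MSI-X handler for the tx vector. Acknowledges
 * NVREG_IRQ_TX_ALL, completes transmitted descriptors under np->lock
 * (irqsave, since the other vectors may fire concurrently) and, after
 * max_interrupt_work iterations, masks the tx interrupts on the nic and
 * hands off to the nic_poll timer.
 */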
3825 #if NVVER < FEDORA7
3826 static irqreturn_t nv_nic_irq_tx(int foo, void *data, struct pt_regs *regs)
3827 #else
3828 static irqreturn_t nv_nic_irq_tx(int foo, void *data)
3829 #endif
3831 struct net_device *dev = (struct net_device *) data;
3832 struct fe_priv *np = get_nvpriv(dev);
3833 u8 __iomem *base = get_hwbase(dev);
3834 u32 events;
3835 int i;
3836 unsigned long flags;
3838 dprintk("%s:%s\n",dev->name,__FUNCTION__);
3840 for (i=0; ; i++) {
3841 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_TX_ALL;
3842 writel(NVREG_IRQ_TX_ALL, base + NvRegMSIXIrqStatus);
3843 dprintk(KERN_DEBUG "%s: tx irq: %08x\n", dev->name, events);
3844 if (!(events & np->irqmask))
3845 break;
3847 spin_lock_irqsave(&np->lock, flags);
3848 nv_tx_done_optimized(dev, TX_WORK_PER_LOOP);
3849 spin_unlock_irqrestore(&np->lock, flags);
3851 if (events & (NVREG_IRQ_TX_ERR)) {
3852 dprintk(KERN_DEBUG "%s: received irq with events 0x%x. Probably TX fail.\n",
3853 dev->name, events);
3855 if (i > max_interrupt_work) {
3856 spin_lock_irqsave(&np->lock, flags);
3857 /* disable interrupts on the nic */
3858 writel(NVREG_IRQ_TX_ALL, base + NvRegIrqMask);
3859 pci_push(base);
3861 if (!np->in_shutdown) {
3862 np->nic_poll_irq |= NVREG_IRQ_TX_ALL;
3863 mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
3865 printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_tx.\n", dev->name, i);
3866 spin_unlock_irqrestore(&np->lock, flags);
3867 break;
3871 dprintk(KERN_DEBUG "%s: nv_nic_irq_tx completed\n", dev->name);
3873 return IRQ_RETVAL(i);
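/*
 * nv_nic_irq_rx: MSI-X handler for the rx vector. Acknowledges
 * NVREG_IRQ_RX_ALL, processes received packets and refills the rx ring,
 * arming the oom_kick timer if the refill allocation fails. Loop-limit
 * handling mirrors the tx handler.
 */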
3876 #if NVVER < FEDORA7
3877 static irqreturn_t nv_nic_irq_rx(int foo, void *data, struct pt_regs *regs)
3878 #else
3879 static irqreturn_t nv_nic_irq_rx(int foo, void *data)
3880 #endif
3882 struct net_device *dev = (struct net_device *) data;
3883 struct fe_priv *np = get_nvpriv(dev);
3884 u8 __iomem *base = get_hwbase(dev);
3885 u32 events;
3886 int i;
3887 unsigned long flags;
3889 dprintk("%s:%s\n",dev->name,__FUNCTION__);
3891 for (i=0; ; i++) {
3892 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_RX_ALL;
3893 writel(NVREG_IRQ_RX_ALL, base + NvRegMSIXIrqStatus);
3894 dprintk(KERN_DEBUG "%s: rx irq: %08x\n", dev->name, events);
3895 if (!(events & np->irqmask))
3896 break;
3898 if (nv_rx_process_optimized(dev, RX_WORK_PER_LOOP)) {
3899 if (unlikely(nv_alloc_rx_optimized(dev))) {
3900 spin_lock_irqsave(&np->lock, flags);
3901 if (!np->in_shutdown)
3902 mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
3903 spin_unlock_irqrestore(&np->lock, flags);
3907 if (i > max_interrupt_work) {
3908 spin_lock_irqsave(&np->lock, flags);
3909 /* disable interrupts on the nic */
3910 writel(NVREG_IRQ_RX_ALL, base + NvRegIrqMask);
3911 pci_push(base);
3913 if (!np->in_shutdown) {
3914 np->nic_poll_irq |= NVREG_IRQ_RX_ALL;
3915 mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
3917 printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_rx.\n", dev->name, i);
3918 spin_unlock_irqrestore(&np->lock, flags);
3919 break;
3923 dprintk(KERN_DEBUG "%s: nv_nic_irq_rx completed\n", dev->name);
3925 return IRQ_RETVAL(i);
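/*
 * nv_nic_irq_other: MSI-X handler for the "other" vector (link, timer and
 * error events). It also reaps tx descriptors in case the tx handler hit its
 * loop limit, handles link changes and the link timer, and defers recoverable
 * errors and loop-limit overruns to the nic_poll timer.
 */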
3928 #if NVVER < FEDORA7
3929 static irqreturn_t nv_nic_irq_other(int foo, void *data, struct pt_regs *regs)
3930 #else
3931 static irqreturn_t nv_nic_irq_other(int foo, void *data)
3932 #endif
3934 struct net_device *dev = (struct net_device *) data;
3935 struct fe_priv *np = get_nvpriv(dev);
3936 u8 __iomem *base = get_hwbase(dev);
3937 u32 events;
3938 int i;
3939 unsigned long flags;
3941 dprintk("%s:%s\n",dev->name,__FUNCTION__);
3943 for (i=0; ; i++) {
3944 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQ_OTHER;
3945 writel(NVREG_IRQ_OTHER, base + NvRegMSIXIrqStatus);
3946 dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
3947 if (!(events & np->irqmask))
3948 break;
3950 /* check tx in case we reached max loop limit in tx isr */
3951 spin_lock_irqsave(&np->lock, flags);
3952 nv_tx_done_optimized(dev, TX_WORK_PER_LOOP);
3953 spin_unlock_irqrestore(&np->lock, flags);
3955 if (events & NVREG_IRQ_LINK) {
3956 spin_lock_irq(&np->lock);
3957 nv_link_irq(dev);
3958 spin_unlock_irq(&np->lock);
3960 if (np->need_linktimer && time_after(jiffies, np->link_timeout)) {
3961 spin_lock_irq(&np->lock);
3962 nv_linkchange(dev);
3963 spin_unlock_irq(&np->lock);
3964 np->link_timeout = jiffies + LINK_TIMEOUT;
3966 if (events & NVREG_IRQ_RECOVER_ERROR) {
3967 spin_lock_irq(&np->lock);
3968 /* disable interrupts on the nic */
3969 writel(NVREG_IRQ_OTHER, base + NvRegIrqMask);
3970 pci_push(base);
3972 if (!np->in_shutdown) {
3973 np->nic_poll_irq |= NVREG_IRQ_OTHER;
3974 np->recover_error = 1;
3975 mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
3977 spin_unlock_irq(&np->lock);
3978 break;
3980 if (events & (NVREG_IRQ_UNKNOWN)) {
3981 printk(KERN_DEBUG "%s: received irq with unknown events 0x%x. Please report\n",
3982 dev->name, events);
3984 if (i > max_interrupt_work) {
3985 spin_lock_irq(&np->lock);
3986 /* disable interrupts on the nic */
3987 writel(NVREG_IRQ_OTHER, base + NvRegIrqMask);
3988 pci_push(base);
3990 if (!np->in_shutdown) {
3991 np->nic_poll_irq |= NVREG_IRQ_OTHER;
3992 mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
3994 printk(KERN_DEBUG "%s: too many iterations (%d) in nv_nic_irq_other.\n", dev->name, i);
3995 spin_unlock_irq(&np->lock);
3996 break;
4000 dprintk(KERN_DEBUG "%s: nv_nic_irq_other completed\n", dev->name);
4002 return IRQ_RETVAL(i);
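/*
 * nv_nic_irq_test: minimal handler installed by nv_interrupt_test. It only
 * acknowledges NVREG_IRQ_TIMER and sets np->intr_test so the self-test can
 * verify that an interrupt was actually delivered.
 */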
4005 #if NVVER < FEDORA7
4006 static irqreturn_t nv_nic_irq_test(int foo, void *data, struct pt_regs *regs)
4007 #else
4008 static irqreturn_t nv_nic_irq_test(int foo, void *data)
4009 #endif
4011 struct net_device *dev = (struct net_device *) data;
4012 struct fe_priv *np = get_nvpriv(dev);
4013 u8 __iomem *base = get_hwbase(dev);
4014 u32 events;
4016 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
4018 if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
4019 events = readl(base + NvRegIrqStatus) & NVREG_IRQSTAT_MASK;
4020 writel(NVREG_IRQ_TIMER, base + NvRegIrqStatus);
4021 } else {
4022 events = readl(base + NvRegMSIXIrqStatus) & NVREG_IRQSTAT_MASK;
4023 writel(NVREG_IRQ_TIMER, base + NvRegMSIXIrqStatus);
4025 pci_push(base);
4026 dprintk(KERN_DEBUG "%s: irq: %08x\n", dev->name, events);
4027 if (!(events & NVREG_IRQ_TIMER))
4028 return IRQ_RETVAL(0);
4030 spin_lock(&np->lock);
4031 np->intr_test = 1;
4032 spin_unlock(&np->lock);
4034 dprintk(KERN_DEBUG "%s: nv_nic_irq_test completed\n", dev->name);
4036 return IRQ_RETVAL(1);
4039 #ifdef CONFIG_PCI_MSI
4040 static void set_msix_vector_map(struct net_device *dev, u32 vector, u32 irqmask)
4042 u8 __iomem *base = get_hwbase(dev);
4043 int i;
4044 u32 msixmap = 0;
4046 /* Each interrupt bit can be mapped to a MSIX vector (4 bits).
4047 * MSIXMap0 represents the first 8 interrupts and MSIXMap1 represents
4048 * the remaining 8 interrupts.
4049 */
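/* Example (illustrative values only): routing interrupt bit 3 to MSI-X
 * vector 2 sets nibble 3 of NvRegMSIXMap0, i.e. ORs in 2 << (3 << 2) = 0x2000.
 * Interrupt bits 8..15 are handled the same way via NvRegMSIXMap1 below. */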
4050 for (i = 0; i < 8; i++) {
4051 if ((irqmask >> i) & 0x1) {
4052 msixmap |= vector << (i << 2);
4055 writel(readl(base + NvRegMSIXMap0) | msixmap, base + NvRegMSIXMap0);
4057 msixmap = 0;
4058 for (i = 0; i < 8; i++) {
4059 if ((irqmask >> (i + 8)) & 0x1) {
4060 msixmap |= vector << (i << 2);
4063 writel(readl(base + NvRegMSIXMap1) | msixmap, base + NvRegMSIXMap1);
4065 #endif
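/*
 * nv_request_irq: sets up interrupt delivery with a fallback chain. On newer
 * kernels it first tries MSI-X: in throughput mode (and outside the interrupt
 * self-test) it requests separate rx/tx/other vectors and programs
 * NvRegMSIXMap0/1 so each interrupt bit is routed to its vector; otherwise a
 * single vector serves all interrupts (nv_nic_irq_optimized for DESC_VER_3,
 * nv_nic_irq otherwise, or nv_nic_irq_test for the self-test). If MSI-X is
 * unavailable it falls back to MSI, and finally to the legacy PCI irq.
 * Returns 0 on success, 1 on failure.
 */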
4067 static int nv_request_irq(struct net_device *dev, int intr_test)
4069 struct fe_priv *np = get_nvpriv(dev);
4070 int ret = 1;
4072 #if NVVER > SLES9
4073 u8 __iomem *base = get_hwbase(dev);
4074 int i;
4076 if (np->msi_flags & NV_MSI_X_CAPABLE) {
4077 for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
4078 np->msi_x_entry[i].entry = i;
4080 if ((ret = pci_enable_msix(np->pci_dev, np->msi_x_entry, (np->msi_flags & NV_MSI_X_VECTORS_MASK))) == 0) {
4081 np->msi_flags |= NV_MSI_X_ENABLED;
4082 if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT && !intr_test) {
4083 /* Request irq for rx handling */
4084 if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, IRQ_FLAG, dev->name, dev) != 0) {
4085 printk(KERN_INFO "forcedeth: request_irq failed for rx %d\n", ret);
4086 pci_disable_msix(np->pci_dev);
4087 np->msi_flags &= ~NV_MSI_X_ENABLED;
4088 goto out_err;
4090 /* Request irq for tx handling */
4091 if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, IRQ_FLAG, dev->name, dev) != 0) {
4092 printk(KERN_INFO "forcedeth: request_irq failed for tx %d\n", ret);
4093 pci_disable_msix(np->pci_dev);
4094 np->msi_flags &= ~NV_MSI_X_ENABLED;
4095 goto out_free_rx;
4097 /* Request irq for link and timer handling */
4098 if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, IRQ_FLAG, dev->name, dev) != 0) {
4099 printk(KERN_INFO "forcedeth: request_irq failed for link %d\n", ret);
4100 pci_disable_msix(np->pci_dev);
4101 np->msi_flags &= ~NV_MSI_X_ENABLED;
4102 goto out_free_tx;
4104 /* map interrupts to their respective vector */
4105 writel(0, base + NvRegMSIXMap0);
4106 writel(0, base + NvRegMSIXMap1);
4107 #ifdef CONFIG_PCI_MSI
4108 set_msix_vector_map(dev, NV_MSI_X_VECTOR_RX, NVREG_IRQ_RX_ALL);
4109 set_msix_vector_map(dev, NV_MSI_X_VECTOR_TX, NVREG_IRQ_TX_ALL);
4110 set_msix_vector_map(dev, NV_MSI_X_VECTOR_OTHER, NVREG_IRQ_OTHER);
4111 #endif
4112 } else {
4113 /* Request irq for all interrupts */
4114 if ((!intr_test && np->desc_ver == DESC_VER_3 &&
4115 request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_optimized, IRQ_FLAG, dev->name, dev) != 0) ||
4116 (!intr_test && np->desc_ver != DESC_VER_3 &&
4117 request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq, IRQ_FLAG, dev->name, dev) != 0) ||
4118 (intr_test &&
4119 request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_test, IRQ_FLAG, dev->name, dev) != 0)) {
4120 printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
4121 pci_disable_msix(np->pci_dev);
4122 np->msi_flags &= ~NV_MSI_X_ENABLED;
4123 goto out_err;
4126 /* map interrupts to vector 0 */
4127 writel(0, base + NvRegMSIXMap0);
4128 writel(0, base + NvRegMSIXMap1);
4132 if (ret != 0 && np->msi_flags & NV_MSI_CAPABLE) {
4133 if ((ret = pci_enable_msi(np->pci_dev)) == 0) {
4134 np->msi_flags |= NV_MSI_ENABLED;
4135 dev->irq = np->pci_dev->irq;
4136 if ((!intr_test && np->desc_ver == DESC_VER_3 &&
4137 request_irq(np->pci_dev->irq, &nv_nic_irq_optimized, IRQ_FLAG, dev->name, dev) != 0) ||
4138 (!intr_test && np->desc_ver != DESC_VER_3 &&
4139 request_irq(np->pci_dev->irq, &nv_nic_irq, IRQ_FLAG, dev->name, dev) != 0) ||
4140 (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQ_FLAG, dev->name, dev) != 0)) {
4141 printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
4142 pci_disable_msi(np->pci_dev);
4143 np->msi_flags &= ~NV_MSI_ENABLED;
4144 dev->irq = np->pci_dev->irq;
4145 goto out_err;
4148 /* map interrupts to vector 0 */
4149 writel(0, base + NvRegMSIMap0);
4150 writel(0, base + NvRegMSIMap1);
4151 /* enable msi vector 0 */
4152 writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask);
4155 #else
4156 #ifdef CONFIG_PCI_MSI
4157 u8 __iomem *base = get_hwbase(dev);
4158 int i;
4160 if (np->msi_flags & NV_MSI_X_CAPABLE) {
4161 for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
4162 np->msi_x_entry[i].entry = i;
4164 if ((ret = pci_enable_msi(np->pci_dev)) == 0) {
4165 np->msi_flags |= NV_MSI_X_ENABLED;
4166 if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT && !intr_test) {
4167 msi_alloc_vectors(np->pci_dev,(int *)np->msi_x_entry,2);
4168 /* Request irq for rx handling */
4169 if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, &nv_nic_irq_rx, IRQ_FLAG, dev->name, dev) != 0) {
4170 printk(KERN_INFO "forcedeth: request_irq failed for rx %d\n", ret);
4171 pci_disable_msi(np->pci_dev);
4172 np->msi_flags &= ~NV_MSI_X_ENABLED;
4173 goto out_err;
4175 /* Request irq for tx handling */
4176 if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, &nv_nic_irq_tx, IRQ_FLAG, dev->name, dev) != 0) {
4177 printk(KERN_INFO "forcedeth: request_irq failed for tx %d\n", ret);
4178 pci_disable_msi(np->pci_dev);
4179 np->msi_flags &= ~NV_MSI_X_ENABLED;
4180 goto out_free_rx;
4182 /* Request irq for link and timer handling */
4183 if (request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector, &nv_nic_irq_other, IRQ_FLAG, dev->name, dev) != 0) {
4184 printk(KERN_INFO "forcedeth: request_irq failed for link %d\n", ret);
4185 pci_disable_msi(np->pci_dev);
4186 np->msi_flags &= ~NV_MSI_X_ENABLED;
4187 goto out_free_tx;
4189 /* map interrupts to their respective vector */
4190 writel(0, base + NvRegMSIXMap0);
4191 writel(0, base + NvRegMSIXMap1);
4192 #ifdef CONFIG_PCI_MSI
4193 set_msix_vector_map(dev, NV_MSI_X_VECTOR_RX, NVREG_IRQ_RX_ALL);
4194 set_msix_vector_map(dev, NV_MSI_X_VECTOR_TX, NVREG_IRQ_TX_ALL);
4195 set_msix_vector_map(dev, NV_MSI_X_VECTOR_OTHER, NVREG_IRQ_OTHER);
4196 #endif
4197 } else {
4198 /* Request irq for all interrupts */
4199 if ((!intr_test && np->desc_ver == DESC_VER_3 &&
4200 request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_optimized, IRQ_FLAG, dev->name, dev) != 0) ||
4201 (!intr_test && np->desc_ver != DESC_VER_3 &&
4202 request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq, IRQ_FLAG, dev->name, dev) != 0) ||
4203 (intr_test &&
4204 request_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector, &nv_nic_irq_test, IRQ_FLAG, dev->name, dev) != 0)) {
4205 printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
4206 pci_disable_msi(np->pci_dev);
4207 np->msi_flags &= ~NV_MSI_X_ENABLED;
4208 goto out_err;
4211 /* map interrupts to vector 0 */
4212 writel(0, base + NvRegMSIXMap0);
4213 writel(0, base + NvRegMSIXMap1);
4217 if (ret != 0 && np->msi_flags & NV_MSI_CAPABLE) {
4219 if ((ret = pci_enable_msi(np->pci_dev)) == 0) {
4220 np->msi_flags |= NV_MSI_ENABLED;
4221 dev->irq = np->pci_dev->irq;
4222 if ((!intr_test && np->desc_ver == DESC_VER_3 &&
4223 request_irq(np->pci_dev->irq, &nv_nic_irq_optimized, IRQ_FLAG, dev->name, dev) != 0) ||
4224 (!intr_test && np->desc_ver != DESC_VER_3 &&
4225 request_irq(np->pci_dev->irq, &nv_nic_irq, IRQ_FLAG, dev->name, dev) != 0) ||
4226 (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQ_FLAG, dev->name, dev) != 0)) {
4227 printk(KERN_INFO "forcedeth: request_irq failed %d\n", ret);
4228 pci_disable_msi(np->pci_dev);
4229 np->msi_flags &= ~NV_MSI_ENABLED;
4230 dev->irq = np->pci_dev->irq;
4231 goto out_err;
4234 /* map interrupts to vector 0 */
4235 writel(0, base + NvRegMSIMap0);
4236 writel(0, base + NvRegMSIMap1);
4237 /* enable msi vector 0 */
4238 writel(NVREG_MSI_VECTOR_0_ENABLED, base + NvRegMSIIrqMask);
4241 #endif
4242 #endif
4243 if (ret != 0) {
4244 if ((!intr_test && np->desc_ver == DESC_VER_3 &&
4245 request_irq(np->pci_dev->irq, &nv_nic_irq_optimized, IRQ_FLAG, dev->name, dev) != 0) ||
4246 (!intr_test && np->desc_ver != DESC_VER_3 &&
4247 request_irq(np->pci_dev->irq, &nv_nic_irq, IRQ_FLAG, dev->name, dev) != 0) ||
4248 (intr_test && request_irq(np->pci_dev->irq, &nv_nic_irq_test, IRQ_FLAG, dev->name, dev) != 0))
4249 goto out_err;
4253 return 0;
4255 #if NVVER > SLES9
4256 out_free_tx:
4257 free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, dev);
4258 out_free_rx:
4259 free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, dev);
4260 #else
4261 #ifdef CONFIG_PCI_MSI
4262 out_free_tx:
4263 free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector, dev);
4264 out_free_rx:
4265 free_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector, dev);
4266 #endif
4267 #endif
4268 out_err:
4269 return 1;
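/*
 * nv_free_irq: releases whatever nv_request_irq set up - every MSI-X vector
 * (and MSI-X itself), or the single MSI/legacy irq, disabling MSI if it was
 * enabled.
 */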
4272 #if NVVER > SLES9
4273 static void nv_free_irq(struct net_device *dev)
4275 struct fe_priv *np = get_nvpriv(dev);
4276 int i;
4278 if (np->msi_flags & NV_MSI_X_ENABLED) {
4279 for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
4280 free_irq(np->msi_x_entry[i].vector, dev);
4282 pci_disable_msix(np->pci_dev);
4283 np->msi_flags &= ~NV_MSI_X_ENABLED;
4284 } else {
4285 free_irq(np->pci_dev->irq, dev);
4286 if (np->msi_flags & NV_MSI_ENABLED) {
4287 pci_disable_msi(np->pci_dev);
4288 np->msi_flags &= ~NV_MSI_ENABLED;
4292 #else
4293 static void nv_free_irq(struct net_device *dev)
4295 struct fe_priv *np = get_nvpriv(dev);
4297 #ifdef CONFIG_PCI_MSI
4298 int i;
4300 if (np->msi_flags & NV_MSI_X_ENABLED) {
4301 for (i = 0; i < (np->msi_flags & NV_MSI_X_VECTORS_MASK); i++) {
4302 free_irq(np->msi_x_entry[i].vector, dev);
4304 pci_disable_msi(np->pci_dev);
4305 np->msi_flags &= ~NV_MSI_X_ENABLED;
4306 } else {
4307 free_irq(np->pci_dev->irq, dev);
4309 if (np->msi_flags & NV_MSI_ENABLED) {
4310 pci_disable_msi(np->pci_dev);
4311 np->msi_flags &= ~NV_MSI_ENABLED;
4314 #else
4315 free_irq(np->pci_dev->irq, dev);
4316 #endif
4319 #endif
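/*
 * nv_do_nic_poll: timer callback armed when an interrupt handler gave up
 * after max_interrupt_work iterations (also used by netpoll). It disables the
 * affected irq line(s), performs a full ring/engine reinit if a recoverable
 * error was flagged, restores the interrupt mask, calls the appropriate
 * handler directly to catch up on pending work, and re-enables the irq(s).
 */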
4321 static void nv_do_nic_poll(unsigned long data)
4323 struct net_device *dev = (struct net_device *) data;
4324 struct fe_priv *np = get_nvpriv(dev);
4325 u8 __iomem *base = get_hwbase(dev);
4326 u32 mask = 0;
4328 /*
4329 * First disable the irq(s) and then
4330 * re-enable interrupts on the nic; we have to do this before calling
4331 * nv_nic_irq because that may decide to do otherwise.
4332 */
4334 spin_lock_irq(&np->timer_lock);
4335 if (!using_multi_irqs(dev)) {
4336 if (np->msi_flags & NV_MSI_X_ENABLED)
4337 disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
4338 else
4339 disable_irq(np->pci_dev->irq);
4340 mask = np->irqmask;
4341 } else {
4342 if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
4343 disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
4344 mask |= NVREG_IRQ_RX_ALL;
4346 if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
4347 disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
4348 mask |= NVREG_IRQ_TX_ALL;
4350 if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
4351 disable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
4352 mask |= NVREG_IRQ_OTHER;
4355 np->nic_poll_irq = 0;
4357 /* disable_irq() contains synchronize_irq, thus no irq handler can run now */
4359 if (np->recover_error) {
4360 np->recover_error = 0;
4361 printk(KERN_INFO "forcedeth: MAC in recoverable error state\n");
4362 if (netif_running(dev)) {
4363 #if NVVER > FEDORA5
4364 netif_tx_lock_bh(dev);
4365 #else
4366 spin_lock_bh(&dev->xmit_lock);
4367 #endif
4368 spin_lock(&np->lock);
4369 /* stop engines */
4370 nv_stop_rx(dev);
4371 nv_stop_tx(dev);
4372 nv_txrx_reset(dev);
4373 /* drain rx and tx queues */
4374 nv_drain_rx(dev);
4375 nv_drain_tx(dev);
4376 /* reinit driver view of the rx queue */
4377 set_bufsize(dev);
4378 if (nv_init_ring(dev)) {
4379 if (!np->in_shutdown)
4380 mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
4382 /* reinit nic view of the rx queue */
4383 writel(np->rx_buf_sz, base + NvRegOffloadConfig);
4384 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
4385 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
4386 base + NvRegRingSizes);
4387 pci_push(base);
4388 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
4389 pci_push(base);
4391 /* restart rx engine */
4392 nv_start_rx(dev);
4393 nv_start_tx(dev);
4394 spin_unlock(&np->lock);
4395 #if NVVER > FEDORA5
4396 netif_tx_unlock_bh(dev);
4397 #else
4398 spin_unlock_bh(&dev->xmit_lock);
4399 #endif
4403 writel(mask, base + NvRegIrqMask);
4404 pci_push(base);
4406 if (!using_multi_irqs(dev)) {
4407 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
4408 #if NVVER < FEDORA7
4409 nv_nic_irq((int) 0, (void *) data, (struct pt_regs *) NULL);
4410 #else
4411 nv_nic_irq((int) 0, (void *) data);
4412 #endif
4413 else
4414 #if NVVER < FEDORA7
4415 nv_nic_irq_optimized((int) 0, (void *) data, (struct pt_regs *) NULL);
4416 #else
4417 nv_nic_irq_optimized((int) 0, (void *) data);
4418 #endif
4419 if (np->msi_flags & NV_MSI_X_ENABLED)
4420 enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_ALL].vector);
4421 else
4422 enable_irq(np->pci_dev->irq);
4423 } else {
4424 if (np->nic_poll_irq & NVREG_IRQ_RX_ALL) {
4425 #if NVVER < FEDORA7
4426 nv_nic_irq_rx((int) 0, (void *) data, (struct pt_regs *) NULL);
4427 #else
4428 nv_nic_irq_rx((int) 0, (void *) data);
4429 #endif
4430 enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_RX].vector);
4432 if (np->nic_poll_irq & NVREG_IRQ_TX_ALL) {
4433 #if NVVER < FEDORA7
4434 nv_nic_irq_tx((int) 0, (void *) data, (struct pt_regs *) NULL);
4435 #else
4436 nv_nic_irq_tx((int) 0, (void *) data);
4437 #endif
4438 enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_TX].vector);
4440 if (np->nic_poll_irq & NVREG_IRQ_OTHER) {
4441 #if NVVER < FEDORA7
4442 nv_nic_irq_other((int) 0, (void *) data, (struct pt_regs *) NULL);
4443 #else
4444 nv_nic_irq_other((int) 0, (void *) data);
4445 #endif
4446 enable_irq(np->msi_x_entry[NV_MSI_X_VECTOR_OTHER].vector);
4449 spin_unlock_irq(&np->timer_lock);
4452 #if NVVER > RHES3
4453 #ifdef CONFIG_NET_POLL_CONTROLLER
4454 static void nv_poll_controller(struct net_device *dev)
4456 nv_do_nic_poll((unsigned long) dev);
4458 #endif
4459 #else
4460 static void nv_poll_controller(struct net_device *dev)
4462 nv_do_nic_poll((unsigned long) dev);
4464 #endif
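/*
 * nv_do_stats_poll: periodic statistics refresh. For nics with hardware
 * counters (DEV_HAS_STATISTICS_V1/V2) it accumulates the counter registers
 * into np->estats, derives rx_packets/rx_errors_total and mirrors the
 * relevant values into np->stats; otherwise it copies the software stats
 * into np->estats. It reschedules itself every STATS_INTERVAL while the
 * device is running.
 */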
4466 static void nv_do_stats_poll(unsigned long data)
4468 struct net_device *dev = (struct net_device *) data;
4469 struct fe_priv *np = get_nvpriv(dev);
4470 u8 __iomem *base = get_hwbase(dev);
4472 spin_lock_irq(&np->lock);
4474 np->estats.tx_dropped = np->stats.tx_dropped;
4475 if (np->driver_data & (DEV_HAS_STATISTICS_V1|DEV_HAS_STATISTICS_V2)) {
4476 np->estats.tx_fifo_errors += readl(base + NvRegTxUnderflow);
4477 np->estats.tx_carrier_errors += readl(base + NvRegTxLossCarrier);
4478 np->estats.tx_bytes += readl(base + NvRegTxCnt);
4479 np->estats.rx_crc_errors += readl(base + NvRegRxFCSErr);
4480 np->estats.rx_over_errors += readl(base + NvRegRxOverflow);
4481 np->estats.tx_zero_rexmt += readl(base + NvRegTxZeroReXmt);
4482 np->estats.tx_one_rexmt += readl(base + NvRegTxOneReXmt);
4483 np->estats.tx_many_rexmt += readl(base + NvRegTxManyReXmt);
4484 np->estats.tx_late_collision += readl(base + NvRegTxLateCol);
4485 np->estats.tx_excess_deferral += readl(base + NvRegTxExcessDef);
4486 np->estats.tx_retry_error += readl(base + NvRegTxRetryErr);
4487 np->estats.rx_frame_error += readl(base + NvRegRxFrameErr);
4488 np->estats.rx_extra_byte += readl(base + NvRegRxExtraByte);
4489 np->estats.rx_late_collision += readl(base + NvRegRxLateCol);
4490 np->estats.rx_runt += readl(base + NvRegRxRunt);
4491 np->estats.rx_frame_too_long += readl(base + NvRegRxFrameTooLong);
4492 np->estats.rx_frame_align_error += readl(base + NvRegRxFrameAlignErr);
4493 np->estats.rx_length_error += readl(base + NvRegRxLenErr);
4494 np->estats.rx_unicast += readl(base + NvRegRxUnicast);
4495 np->estats.rx_multicast += readl(base + NvRegRxMulticast);
4496 np->estats.rx_broadcast += readl(base + NvRegRxBroadcast);
4497 np->estats.rx_packets =
4498 np->estats.rx_unicast +
4499 np->estats.rx_multicast +
4500 np->estats.rx_broadcast;
4501 np->estats.rx_errors_total =
4502 np->estats.rx_crc_errors +
4503 np->estats.rx_over_errors +
4504 np->estats.rx_frame_error +
4505 (np->estats.rx_frame_align_error - np->estats.rx_extra_byte) +
4506 np->estats.rx_late_collision +
4507 np->estats.rx_runt +
4508 np->estats.rx_frame_too_long +
4509 np->rx_len_errors;
4511 if (np->driver_data & DEV_HAS_STATISTICS_V2) {
4512 np->estats.tx_deferral += readl(base + NvRegTxDef);
4513 np->estats.tx_packets += readl(base + NvRegTxFrame);
4514 np->estats.rx_bytes += readl(base + NvRegRxCnt);
4515 np->estats.tx_pause += readl(base + NvRegTxPause);
4516 np->estats.rx_pause += readl(base + NvRegRxPause);
4517 np->estats.rx_drop_frame += readl(base + NvRegRxDropFrame);
4520 /* copy to net_device stats */
4521 np->stats.tx_fifo_errors = np->estats.tx_fifo_errors;
4522 np->stats.tx_carrier_errors = np->estats.tx_carrier_errors;
4523 np->stats.tx_bytes = np->estats.tx_bytes;
4524 np->stats.rx_crc_errors = np->estats.rx_crc_errors;
4525 np->stats.rx_over_errors = np->estats.rx_over_errors;
4526 np->stats.rx_packets = np->estats.rx_packets;
4527 np->stats.rx_errors = np->estats.rx_errors_total;
4529 } else {
4530 np->estats.tx_packets = np->stats.tx_packets;
4531 np->estats.tx_fifo_errors = np->stats.tx_fifo_errors;
4532 np->estats.tx_carrier_errors = np->stats.tx_carrier_errors;
4533 np->estats.tx_bytes = np->stats.tx_bytes;
4534 np->estats.rx_bytes = np->stats.rx_bytes;
4535 np->estats.rx_crc_errors = np->stats.rx_crc_errors;
4536 np->estats.rx_over_errors = np->stats.rx_over_errors;
4537 np->estats.rx_packets = np->stats.rx_packets;
4538 np->estats.rx_errors_total = np->stats.rx_errors;
4541 if (!np->in_shutdown && netif_running(dev))
4542 mod_timer(&np->stats_poll, jiffies + STATS_INTERVAL);
4543 spin_unlock_irq(&np->lock);
4546 /*
4547 * nv_get_stats: dev->get_stats function
4548 * Get latest stats value from the nic.
4549 * Called with read_lock(&dev_base_lock) held for read -
4550 * only synchronized against unregister_netdevice.
4551 */
4552 static struct net_device_stats *nv_get_stats(struct net_device *dev)
4554 struct fe_priv *np = get_nvpriv(dev);
4556 /* It seems that the nic always generates interrupts and doesn't
4557 * accumulate errors internally. Thus the current values in np->stats
4558 * are already up to date.
4559 */
4560 nv_do_stats_poll((unsigned long)dev);
4561 return &np->stats;
4564 static void nv_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
4566 struct fe_priv *np = get_nvpriv(dev);
4567 strcpy(info->driver, "forcedeth");
4568 strcpy(info->version, FORCEDETH_VERSION);
4569 strcpy(info->bus_info, pci_name(np->pci_dev));
4572 static void nv_get_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo)
4574 struct fe_priv *np = get_nvpriv(dev);
4575 wolinfo->supported = WAKE_MAGIC;
4577 spin_lock_irq(&np->lock);
4578 if (np->wolenabled)
4579 wolinfo->wolopts = WAKE_MAGIC;
4580 spin_unlock_irq(&np->lock);
4583 static int nv_set_wol(struct net_device *dev, struct ethtool_wolinfo *wolinfo)
4585 struct fe_priv *np = get_nvpriv(dev);
4586 u8 __iomem *base = get_hwbase(dev);
4587 u32 flags = 0;
4589 if (wolinfo->wolopts == 0) {
4590 np->wolenabled = 0;
4591 } else if (wolinfo->wolopts & WAKE_MAGIC) {
4592 np->wolenabled = 1;
4593 flags = NVREG_WAKEUPFLAGS_ENABLE;
4595 if (netif_running(dev)) {
4596 spin_lock_irq(&np->lock);
4597 writel(flags, base + NvRegWakeUpFlags);
4598 spin_unlock_irq(&np->lock);
4600 return 0;
4603 static int nv_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
4605 struct fe_priv *np = get_nvpriv(dev);
4606 int adv;
4608 spin_lock_irq(&np->lock);
4609 ecmd->port = PORT_MII;
4610 if (!netif_running(dev)) {
4611 /* We do not track link speed / duplex setting if the
4612 * interface is disabled. Force a link check */
4613 if (nv_update_linkspeed(dev)) {
4614 if (!netif_carrier_ok(dev))
4615 netif_carrier_on(dev);
4616 } else {
4617 if (netif_carrier_ok(dev))
4618 netif_carrier_off(dev);
4622 if (netif_carrier_ok(dev)) {
4623 switch(np->linkspeed & (NVREG_LINKSPEED_MASK)) {
4624 case NVREG_LINKSPEED_10:
4625 ecmd->speed = SPEED_10;
4626 break;
4627 case NVREG_LINKSPEED_100:
4628 ecmd->speed = SPEED_100;
4629 break;
4630 case NVREG_LINKSPEED_1000:
4631 ecmd->speed = SPEED_1000;
4632 break;
4634 ecmd->duplex = DUPLEX_HALF;
4635 if (np->duplex)
4636 ecmd->duplex = DUPLEX_FULL;
4637 } else {
4638 ecmd->speed = -1;
4639 ecmd->duplex = -1;
4642 ecmd->autoneg = np->autoneg;
4644 ecmd->advertising = ADVERTISED_MII;
4645 if (np->autoneg) {
4646 ecmd->advertising |= ADVERTISED_Autoneg;
4647 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
4648 if (adv & ADVERTISE_10HALF)
4649 ecmd->advertising |= ADVERTISED_10baseT_Half;
4650 if (adv & ADVERTISE_10FULL)
4651 ecmd->advertising |= ADVERTISED_10baseT_Full;
4652 if (adv & ADVERTISE_100HALF)
4653 ecmd->advertising |= ADVERTISED_100baseT_Half;
4654 if (adv & ADVERTISE_100FULL)
4655 ecmd->advertising |= ADVERTISED_100baseT_Full;
4656 if (np->gigabit == PHY_GIGABIT) {
4657 adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
4658 if (adv & ADVERTISE_1000FULL)
4659 ecmd->advertising |= ADVERTISED_1000baseT_Full;
4662 ecmd->supported = (SUPPORTED_Autoneg |
4663 SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full |
4664 SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full |
4665 SUPPORTED_MII);
4666 if (np->gigabit == PHY_GIGABIT)
4667 ecmd->supported |= SUPPORTED_1000baseT_Full;
4669 ecmd->phy_address = np->phyaddr;
4670 ecmd->transceiver = XCVR_EXTERNAL;
4672 /* ignore maxtxpkt, maxrxpkt for now */
4673 spin_unlock_irq(&np->lock);
4674 return 0;
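/*
 * nv_set_settings: ethtool set_settings hook. After validating the request
 * it stops the engines, then either rewrites the autoneg advertisement
 * registers (MII_ADVERTISE and, for gigabit phys, MII_CTRL1000) and restarts
 * autonegotiation, or programs a fixed 10/100 speed and duplex directly in
 * BMCR; Marvell phys are reset so the new settings take effect. Finally the
 * engines and interrupts are re-enabled.
 */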
4677 static int nv_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
4679 struct fe_priv *np = get_nvpriv(dev);
4681 dprintk(KERN_DEBUG "%s: nv_set_settings \n", dev->name);
4682 if (ecmd->port != PORT_MII)
4683 return -EINVAL;
4684 if (ecmd->transceiver != XCVR_EXTERNAL)
4685 return -EINVAL;
4686 if (ecmd->phy_address != np->phyaddr) {
4687 /* TODO: support switching between multiple phys. Should be
4688 * trivial, but not enabled due to lack of test hardware. */
4689 return -EINVAL;
4691 if (ecmd->autoneg == AUTONEG_ENABLE) {
4692 u32 mask;
4694 mask = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full |
4695 ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full;
4696 if (np->gigabit == PHY_GIGABIT)
4697 mask |= ADVERTISED_1000baseT_Full;
4699 if ((ecmd->advertising & mask) == 0)
4700 return -EINVAL;
4702 } else if (ecmd->autoneg == AUTONEG_DISABLE) {
4703 /* Note: with autonegotiation disabled, forcing speed 1000 is intentionally
4704 * forbidden - no one should need that. */
4706 if (ecmd->speed != SPEED_10 && ecmd->speed != SPEED_100)
4707 return -EINVAL;
4708 if (ecmd->duplex != DUPLEX_HALF && ecmd->duplex != DUPLEX_FULL)
4709 return -EINVAL;
4710 } else {
4711 return -EINVAL;
4714 netif_carrier_off(dev);
4715 if (netif_running(dev)) {
4716 nv_disable_hw_interrupts(dev, np->irqmask);
4717 #if NVVER > RHES3
4718 synchronize_irq(np->pci_dev->irq);
4719 #else
4720 synchronize_irq();
4721 #endif
4722 #if NVVER > FEDORA5
4723 netif_tx_lock_bh(dev);
4724 #else
4725 spin_lock_bh(&dev->xmit_lock);
4726 #endif
4727 spin_lock(&np->lock);
4728 /* stop engines */
4729 nv_stop_rx(dev);
4730 nv_stop_tx(dev);
4731 spin_unlock(&np->lock);
4732 #if NVVER > FEDORA5
4733 netif_tx_unlock_bh(dev);
4734 #else
4735 spin_unlock_bh(&dev->xmit_lock);
4736 #endif
4739 if (ecmd->autoneg == AUTONEG_ENABLE) {
4740 int adv, bmcr;
4742 np->autoneg = 1;
4744 /* advertise only what has been requested */
4745 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
4746 adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
4747 if (ecmd->advertising & ADVERTISED_10baseT_Half) {
4748 adv |= ADVERTISE_10HALF;
4749 np->speed_duplex = NV_SPEED_DUPLEX_10_HALF_DUPLEX;
4751 if (ecmd->advertising & ADVERTISED_10baseT_Full) {
4752 adv |= ADVERTISE_10FULL;
4753 np->speed_duplex = NV_SPEED_DUPLEX_10_FULL_DUPLEX;
4755 if (ecmd->advertising & ADVERTISED_100baseT_Half) {
4756 adv |= ADVERTISE_100HALF;
4757 np->speed_duplex = NV_SPEED_DUPLEX_100_HALF_DUPLEX;
4759 if (ecmd->advertising & ADVERTISED_100baseT_Full) {
4760 adv |= ADVERTISE_100FULL;
4761 np->speed_duplex = NV_SPEED_DUPLEX_100_FULL_DUPLEX;
4763 if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisements but disable tx pause */
4764 adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
4765 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
4766 adv |= ADVERTISE_PAUSE_ASYM;
4767 mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
4769 if (np->gigabit == PHY_GIGABIT) {
4770 adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
4771 adv &= ~ADVERTISE_1000FULL;
4772 if (ecmd->advertising & ADVERTISED_1000baseT_Full) {
4773 adv |= ADVERTISE_1000FULL;
4774 np->speed_duplex = NV_SPEED_DUPLEX_1000_FULL_DUPLEX;
4776 mii_rw(dev, np->phyaddr, MII_CTRL1000, adv);
4778 if (ecmd->advertising & (ADVERTISED_10baseT_Half|ADVERTISED_10baseT_Full|ADVERTISED_100baseT_Half|ADVERTISED_100baseT_Full|ADVERTISED_1000baseT_Full))
4779 np->speed_duplex = NV_SPEED_DUPLEX_AUTO;
4780 } else {
4781 if (ecmd->advertising & (ADVERTISED_10baseT_Half|ADVERTISED_10baseT_Full|ADVERTISED_100baseT_Half|ADVERTISED_100baseT_Full))
4782 np->speed_duplex = NV_SPEED_DUPLEX_AUTO;
4785 if (netif_running(dev))
4786 printk(KERN_INFO "%s: link down.\n", dev->name);
4787 bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
4788 if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
4789 bmcr |= BMCR_ANENABLE;
4790 /* reset the phy in order for settings to stick,
4791 * and cause autoneg to start */
4792 if (phy_reset(dev, bmcr)) {
4793 printk(KERN_INFO "%s: phy reset failed\n", dev->name);
4794 return -EINVAL;
4796 } else {
4797 bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
4798 mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
4800 } else {
4801 int adv, bmcr;
4803 np->autoneg = 0;
4805 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
4806 adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4 | ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
4807 if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_HALF) {
4808 adv |= ADVERTISE_10HALF;
4809 np->speed_duplex = NV_SPEED_DUPLEX_10_HALF_DUPLEX;
4811 if (ecmd->speed == SPEED_10 && ecmd->duplex == DUPLEX_FULL) {
4812 adv |= ADVERTISE_10FULL;
4813 np->speed_duplex = NV_SPEED_DUPLEX_10_FULL_DUPLEX;
4815 if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_HALF) {
4816 adv |= ADVERTISE_100HALF;
4817 np->speed_duplex = NV_SPEED_DUPLEX_100_HALF_DUPLEX;
4819 if (ecmd->speed == SPEED_100 && ecmd->duplex == DUPLEX_FULL) {
4820 adv |= ADVERTISE_100FULL;
4821 np->speed_duplex = NV_SPEED_DUPLEX_100_FULL_DUPLEX;
4823 np->pause_flags &= ~(NV_PAUSEFRAME_AUTONEG|NV_PAUSEFRAME_RX_ENABLE|NV_PAUSEFRAME_TX_ENABLE);
4824 if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) { /* for rx we set both advertisements but disable tx pause */
4825 adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
4826 np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
4828 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ) {
4829 adv |= ADVERTISE_PAUSE_ASYM;
4830 np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
4832 mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
4833 np->fixed_mode = adv;
4835 if (np->gigabit == PHY_GIGABIT) {
4836 adv = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
4837 adv &= ~ADVERTISE_1000FULL;
4838 mii_rw(dev, np->phyaddr, MII_CTRL1000, adv);
4841 bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
4842 bmcr &= ~(BMCR_ANENABLE|BMCR_SPEED100|BMCR_SPEED1000|BMCR_FULLDPLX);
4843 if (np->fixed_mode & (ADVERTISE_10FULL|ADVERTISE_100FULL))
4844 bmcr |= BMCR_FULLDPLX;
4845 if (np->fixed_mode & (ADVERTISE_100HALF|ADVERTISE_100FULL))
4846 bmcr |= BMCR_SPEED100;
4847 if (np->phy_oui == PHY_OUI_MARVELL) {
4848 /* reset the phy in order for forced mode settings to stick */
4849 if (phy_reset(dev, bmcr)) {
4850 printk(KERN_INFO "%s: phy reset failed\n", dev->name);
4851 return -EINVAL;
4853 } else {
4854 mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
4855 if (netif_running(dev)) {
4856 /* Wait a bit and then reconfigure the nic. */
4857 udelay(10);
4858 nv_linkchange(dev);
4863 if (netif_running(dev)) {
4864 nv_start_rx(dev);
4865 nv_start_tx(dev);
4866 nv_enable_hw_interrupts(dev, np->irqmask);
4869 return 0;
4872 #define FORCEDETH_REGS_VER 1
4874 static int nv_get_regs_len(struct net_device *dev)
4876 struct fe_priv *np = get_nvpriv(dev);
4877 return np->register_size;
4880 static void nv_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *buf)
4882 struct fe_priv *np = get_nvpriv(dev);
4883 u8 __iomem *base = get_hwbase(dev);
4884 u32 *rbuf = buf;
4885 int i;
4887 regs->version = FORCEDETH_REGS_VER;
4888 spin_lock_irq(&np->lock);
4889 for (i = 0; i < np->register_size/sizeof(u32); i++)
4890 rbuf[i] = readl(base + i*sizeof(u32));
4891 spin_unlock_irq(&np->lock);
4894 static int nv_nway_reset(struct net_device *dev)
4896 struct fe_priv *np = get_nvpriv(dev);
4897 int ret;
4899 if (np->autoneg) {
4900 int bmcr;
4902 netif_carrier_off(dev);
4903 if (netif_running(dev)) {
4904 nv_disable_irq(dev);
4905 #if NVVER > FEDORA5
4906 netif_tx_lock_bh(dev);
4907 #else
4908 spin_lock_bh(&dev->xmit_lock);
4909 #endif
4910 spin_lock(&np->lock);
4911 /* stop engines */
4912 nv_stop_rx(dev);
4913 nv_stop_tx(dev);
4914 spin_unlock(&np->lock);
4915 #if NVVER > FEDORA5
4916 netif_tx_unlock_bh(dev);
4917 #else
4918 spin_unlock_bh(&dev->xmit_lock);
4919 #endif
4920 printk(KERN_INFO "%s: link down.\n", dev->name);
4923 bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
4924 if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
4925 bmcr |= BMCR_ANENABLE;
4926 /* reset the phy in order for settings to stick*/
4927 if (phy_reset(dev, bmcr)) {
4928 printk(KERN_INFO "%s: phy reset failed\n", dev->name);
4929 return -EINVAL;
4931 } else {
4932 bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
4933 mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
4936 if (netif_running(dev)) {
4937 nv_start_rx(dev);
4938 nv_start_tx(dev);
4939 nv_enable_irq(dev);
4941 ret = 0;
4942 } else {
4943 ret = -EINVAL;
4946 return ret;
4949 static void nv_get_ringparam(struct net_device *dev, struct ethtool_ringparam* ring)
4951 struct fe_priv *np = get_nvpriv(dev);
4953 ring->rx_max_pending = (np->desc_ver == DESC_VER_1) ? RING_MAX_DESC_VER_1 : RING_MAX_DESC_VER_2_3;
4954 ring->rx_mini_max_pending = 0;
4955 ring->rx_jumbo_max_pending = 0;
4956 ring->tx_max_pending = (np->desc_ver == DESC_VER_1) ? RING_MAX_DESC_VER_1 : RING_MAX_DESC_VER_2_3;
4958 ring->rx_pending = np->rx_ring_size;
4959 ring->rx_mini_pending = 0;
4960 ring->rx_jumbo_pending = 0;
4961 ring->tx_pending = np->tx_ring_size;
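/*
 * nv_set_ringparam: resizes the rx/tx descriptor rings. New rings and skb
 * maps are allocated first, so a failed allocation leaves the current rings
 * untouched (-ENOMEM). With the device running, the engines are stopped and
 * drained, the old rings freed, the new sizes and ring pointers installed,
 * and the driver and nic view of the rings reinitialized before restarting.
 */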
4964 static int nv_set_ringparam(struct net_device *dev, struct ethtool_ringparam* ring)
4966 struct fe_priv *np = get_nvpriv(dev);
4967 u8 __iomem *base = get_hwbase(dev);
4968 u8 *rxtx_ring, *rx_skbuff, *tx_skbuff;
4969 dma_addr_t ring_addr;
4971 if (ring->rx_pending < RX_RING_MIN ||
4972 ring->tx_pending < TX_RING_MIN ||
4973 ring->rx_mini_pending != 0 ||
4974 ring->rx_jumbo_pending != 0 ||
4975 (np->desc_ver == DESC_VER_1 &&
4976 (ring->rx_pending > RING_MAX_DESC_VER_1 ||
4977 ring->tx_pending > RING_MAX_DESC_VER_1)) ||
4978 (np->desc_ver != DESC_VER_1 &&
4979 (ring->rx_pending > RING_MAX_DESC_VER_2_3 ||
4980 ring->tx_pending > RING_MAX_DESC_VER_2_3))) {
4981 return -EINVAL;
4984 /* allocate new rings */
4985 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
4986 rxtx_ring = pci_alloc_consistent(np->pci_dev,
4987 sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending),
4988 &ring_addr);
4989 } else {
4990 rxtx_ring = pci_alloc_consistent(np->pci_dev,
4991 sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending),
4992 &ring_addr);
4994 rx_skbuff = kmalloc(sizeof(struct nv_skb_map) * ring->rx_pending, GFP_KERNEL);
4995 tx_skbuff = kmalloc(sizeof(struct nv_skb_map) * ring->tx_pending, GFP_KERNEL);
4997 if (!rxtx_ring || !rx_skbuff || !tx_skbuff) {
4998 /* fall back to old rings */
4999 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
5000 if(rxtx_ring)
5001 pci_free_consistent(np->pci_dev, sizeof(struct ring_desc) * (ring->rx_pending + ring->tx_pending),
5002 rxtx_ring, ring_addr);
5003 } else {
5004 if (rxtx_ring)
5005 pci_free_consistent(np->pci_dev, sizeof(struct ring_desc_ex) * (ring->rx_pending + ring->tx_pending),
5006 rxtx_ring, ring_addr);
5008 if (rx_skbuff)
5009 kfree(rx_skbuff);
5010 if (tx_skbuff)
5011 kfree(tx_skbuff);
5012 goto exit;
5015 if (netif_running(dev)) {
5016 nv_disable_irq(dev);
5017 #if NVVER > FEDORA5
5018 netif_tx_lock_bh(dev);
5019 #else
5020 spin_lock_bh(&dev->xmit_lock);
5021 #endif
5022 spin_lock(&np->lock);
5023 /* stop engines */
5024 nv_stop_rx(dev);
5025 nv_stop_tx(dev);
5026 nv_txrx_reset(dev);
5027 /* drain queues */
5028 nv_drain_rx(dev);
5029 nv_drain_tx(dev);
5030 /* delete queues */
5031 free_rings(dev);
5034 /* set new values */
5035 np->rx_ring_size = ring->rx_pending;
5036 np->tx_ring_size = ring->tx_pending;
5037 np->tx_limit_stop = np->tx_ring_size - TX_LIMIT_DIFFERENCE;
5038 np->tx_limit_start = np->tx_ring_size - TX_LIMIT_DIFFERENCE - 1;
5039 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
5040 np->rx_ring.orig = (struct ring_desc*)rxtx_ring;
5041 np->tx_ring.orig = &np->rx_ring.orig[np->rx_ring_size];
5042 } else {
5043 np->rx_ring.ex = (struct ring_desc_ex*)rxtx_ring;
5044 np->tx_ring.ex = &np->rx_ring.ex[np->rx_ring_size];
5046 np->rx_skb = (struct nv_skb_map*)rx_skbuff;
5047 np->tx_skb = (struct nv_skb_map*)tx_skbuff;
5048 np->ring_addr = ring_addr;
5050 memset(np->rx_skb, 0, sizeof(struct nv_skb_map) * np->rx_ring_size);
5051 memset(np->tx_skb, 0, sizeof(struct nv_skb_map) * np->tx_ring_size);
5053 if (netif_running(dev)) {
5054 /* reinit driver view of the queues */
5055 set_bufsize(dev);
5056 if (nv_init_ring(dev)) {
5057 if (!np->in_shutdown)
5058 mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
5061 /* reinit nic view of the queues */
5062 writel(np->rx_buf_sz, base + NvRegOffloadConfig);
5063 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
5064 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
5065 base + NvRegRingSizes);
5066 pci_push(base);
5067 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
5068 pci_push(base);
5070 /* restart engines */
5071 nv_start_rx(dev);
5072 nv_start_tx(dev);
5073 spin_unlock(&np->lock);
5074 #if NVVER > FEDORA5
5075 netif_tx_unlock_bh(dev);
5076 #else
5077 spin_unlock_bh(&dev->xmit_lock);
5078 #endif
5079 nv_enable_irq(dev);
5081 return 0;
5082 exit:
5083 return -ENOMEM;
5086 static void nv_get_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause)
5088 struct fe_priv *np = get_nvpriv(dev);
5090 pause->autoneg = (np->pause_flags & NV_PAUSEFRAME_AUTONEG) != 0;
5091 pause->rx_pause = (np->pause_flags & NV_PAUSEFRAME_RX_ENABLE) != 0;
5092 pause->tx_pause = (np->pause_flags & NV_PAUSEFRAME_TX_ENABLE) != 0;
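/*
 * nv_set_pauseparam: applies ethtool pause-frame settings. Requests are
 * rejected on a half-duplex forced link or if the hardware cannot generate
 * tx pause frames. With autoneg, the pause bits are folded into the
 * advertisement and autonegotiation is restarted; otherwise the rx/tx pause
 * enables are applied directly via nv_update_pause.
 */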
5095 static int nv_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam* pause)
5097 struct fe_priv *np = get_nvpriv(dev);
5098 int adv, bmcr;
5100 if ((!np->autoneg && np->duplex == 0) ||
5101 (np->autoneg && !pause->autoneg && np->duplex == 0)) {
5102 printk(KERN_INFO "%s: can not set pause settings when forced link is in half duplex.\n",
5103 dev->name);
5104 return -EINVAL;
5106 if (pause->tx_pause && !(np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE)) {
5107 printk(KERN_INFO "%s: hardware does not support tx pause frames.\n", dev->name);
5108 return -EINVAL;
5111 netif_carrier_off(dev);
5112 if (netif_running(dev)) {
5113 nv_disable_irq(dev);
5114 #if NVVER > FEDORA5
5115 netif_tx_lock_bh(dev);
5116 #else
5117 spin_lock_bh(&dev->xmit_lock);
5118 #endif
5119 spin_lock(&np->lock);
5120 /* stop engines */
5121 nv_stop_rx(dev);
5122 nv_stop_tx(dev);
5123 spin_unlock(&np->lock);
5124 #if NVVER > FEDORA5
5125 netif_tx_unlock_bh(dev);
5126 #else
5127 spin_unlock_bh(&dev->xmit_lock);
5128 #endif
5131 np->pause_flags &= ~(NV_PAUSEFRAME_RX_REQ|NV_PAUSEFRAME_TX_REQ);
5132 if (pause->rx_pause)
5133 np->pause_flags |= NV_PAUSEFRAME_RX_REQ;
5134 if (pause->tx_pause)
5135 np->pause_flags |= NV_PAUSEFRAME_TX_REQ;
5137 if (np->autoneg && pause->autoneg) {
5138 np->pause_flags |= NV_PAUSEFRAME_AUTONEG;
5140 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
5141 adv &= ~(ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
5142 if (np->pause_flags & NV_PAUSEFRAME_RX_REQ) /* for rx we set both advertisements but disable tx pause */
5143 adv |= ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM;
5144 if (np->pause_flags & NV_PAUSEFRAME_TX_REQ)
5145 adv |= ADVERTISE_PAUSE_ASYM;
5146 mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
5148 if (netif_running(dev))
5149 printk(KERN_INFO "%s: link down.\n", dev->name);
5150 bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
5151 bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
5152 mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
5153 } else {
5154 np->pause_flags &= ~(NV_PAUSEFRAME_AUTONEG|NV_PAUSEFRAME_RX_ENABLE|NV_PAUSEFRAME_TX_ENABLE);
5155 if (pause->rx_pause)
5156 np->pause_flags |= NV_PAUSEFRAME_RX_ENABLE;
5157 if (pause->tx_pause)
5158 np->pause_flags |= NV_PAUSEFRAME_TX_ENABLE;
5160 if (!netif_running(dev))
5161 nv_update_linkspeed(dev);
5162 else
5163 nv_update_pause(dev, np->pause_flags);
5166 if (netif_running(dev)) {
5167 nv_start_rx(dev);
5168 nv_start_tx(dev);
5169 nv_enable_irq(dev);
5171 return 0;
5174 static u32 nv_get_rx_csum(struct net_device *dev)
5176 struct fe_priv *np = get_nvpriv(dev);
5177 return (np->rx_csum) != 0;
5180 static int nv_set_rx_csum(struct net_device *dev, u32 data)
5182 struct fe_priv *np = get_nvpriv(dev);
5183 u8 __iomem *base = get_hwbase(dev);
5184 int retcode = 0;
5186 if (np->driver_data & DEV_HAS_CHECKSUM) {
5188 if (data) {
5189 np->rx_csum = 1;
5190 np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK;
5191 } else {
5192 np->rx_csum = 0;
5193 /* vlan is dependent on rx checksum offload */
5194 if (!(np->vlanctl_bits & NVREG_VLANCONTROL_ENABLE))
5195 np->txrxctl_bits &= ~NVREG_TXRXCTL_RXCHECK;
5198 if (netif_running(dev)) {
5199 spin_lock_irq(&np->lock);
5200 writel(np->txrxctl_bits, base + NvRegTxRxControl);
5201 spin_unlock_irq(&np->lock);
5203 } else {
5204 return -EINVAL;
5207 return retcode;
5210 #ifdef NETIF_F_TSO
5211 static int nv_set_tso(struct net_device *dev, u32 data)
5213 struct fe_priv *np = get_nvpriv(dev);
5215 if (np->driver_data & DEV_HAS_CHECKSUM){
5216 #if NVVER < SUSE10
5217 if(data){
5218 if(ethtool_op_get_sg(dev)==0)
5219 return -EINVAL;
5221 #endif
5222 return ethtool_op_set_tso(dev, data);
5223 } else
5224 return -EINVAL;
5226 #endif
5228 static int nv_set_sg(struct net_device *dev, u32 data)
5230 struct fe_priv *np = get_nvpriv(dev);
5232 if (np->driver_data & DEV_HAS_CHECKSUM){
5233 #if NVVER < SUSE10
5234 if(data){
5235 if(ethtool_op_get_tx_csum(dev)==0)
5236 return -EINVAL;
5238 #ifdef NETIF_F_TSO
5239 if(!data)
5240 /* set tso off */
5241 nv_set_tso(dev,data);
5242 #endif
5243 #endif
5244 return ethtool_op_set_sg(dev, data);
5245 } else
5246 return -EINVAL;
5249 static int nv_set_tx_csum(struct net_device *dev, u32 data)
5251 struct fe_priv *np = get_nvpriv(dev);
5253 #if NVVER < SUSE10
5254 /* set sg off if tx off */
5255 if(!data)
5256 nv_set_sg(dev,data);
5257 #endif
5258 if (np->driver_data & DEV_HAS_CHECKSUM)
5259 #if NVVER > RHES4
5260 return ethtool_op_set_tx_hw_csum(dev, data);
5261 #else
5263 if (data)
5264 dev->features |= NETIF_F_IP_CSUM;
5265 else
5266 dev->features &= ~NETIF_F_IP_CSUM;
5267 return 0;
5269 #endif
5270 else
5271 return -EINVAL;
5274 static int nv_get_stats_count(struct net_device *dev)
5276 struct fe_priv *np = get_nvpriv(dev);
5278 if (np->driver_data & DEV_HAS_STATISTICS_V1)
5279 return NV_DEV_STATISTICS_V1_COUNT;
5280 else if (np->driver_data & DEV_HAS_STATISTICS_V2)
5281 return NV_DEV_STATISTICS_V2_COUNT;
5282 else
5283 return NV_DEV_STATISTICS_SW_COUNT;
5286 static void nv_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *estats, u64 *buffer)
5288 struct fe_priv *np = get_nvpriv(dev);
5290 /* update stats */
5291 nv_do_stats_poll((unsigned long)dev);
5293 memcpy(buffer, &np->estats, nv_get_stats_count(dev)*sizeof(u64));
5296 static int nv_self_test_count(struct net_device *dev)
5298 struct fe_priv *np = get_nvpriv(dev);
5300 if (np->driver_data & DEV_HAS_TEST_EXTENDED)
5301 return NV_TEST_COUNT_EXTENDED;
5302 else
5303 return NV_TEST_COUNT_BASE;
5306 static int nv_link_test(struct net_device *dev)
5308 struct fe_priv *np = get_nvpriv(dev);
5309 int mii_status;
5311 mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
5312 mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
5314 /* check phy link status */
5315 if (!(mii_status & BMSR_LSTATUS))
5316 return 0;
5317 else
5318 return 1;
5321 static int nv_register_test(struct net_device *dev)
5323 u8 __iomem *base = get_hwbase(dev);
5324 int i = 0;
5325 u32 orig_read, new_read;
5327 do {
5328 orig_read = readl(base + nv_registers_test[i].reg);
5330 /* xor with mask to toggle bits */
5331 orig_read ^= nv_registers_test[i].mask;
5333 writel(orig_read, base + nv_registers_test[i].reg);
5335 new_read = readl(base + nv_registers_test[i].reg);
5337 if ((new_read & nv_registers_test[i].mask) != (orig_read & nv_registers_test[i].mask))
5338 return 0;
5340 /* restore original value */
5341 orig_read ^= nv_registers_test[i].mask;
5342 writel(orig_read, base + nv_registers_test[i].reg);
5344 } while (nv_registers_test[++i].reg != 0);
5346 return 1;
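/*
 * nv_interrupt_test: offline self-test for interrupt delivery. It releases
 * the current irq, clears np->intr_test, installs nv_nic_irq_test on a single
 * vector, enables only the timer interrupt and waits 100ms for the handler to
 * set np->intr_test, then restores the original interrupt configuration.
 * Returns 1 on success, 2 if no interrupt arrived, 0 if irq setup failed.
 */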
5349 static int nv_interrupt_test(struct net_device *dev)
5351 struct fe_priv *np = get_nvpriv(dev);
5352 u8 __iomem *base = get_hwbase(dev);
5353 int ret = 1;
5354 int testcnt;
5355 u32 save_msi_flags, save_poll_interval = 0;
5357 if (netif_running(dev)) {
5358 /* free current irq */
5359 nv_free_irq(dev);
5360 save_poll_interval = readl(base+NvRegPollingInterval);
5363 /* flag to test interrupt handler */
5364 np->intr_test = 0;
5366 /* setup test irq */
5367 save_msi_flags = np->msi_flags;
5368 np->msi_flags &= ~NV_MSI_X_VECTORS_MASK;
5369 np->msi_flags |= 0x001; /* setup 1 vector */
5370 if (nv_request_irq(dev, 1))
5371 return 0;
5373 /* setup timer interrupt */
5374 writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval);
5375 writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
5377 nv_enable_hw_interrupts(dev, NVREG_IRQ_TIMER);
5379 /* wait for at least one interrupt */
5380 nv_msleep(100);
5382 spin_lock_irq(&np->lock);
5384 /* flag should be set within ISR */
5385 testcnt = np->intr_test;
5386 if (!testcnt)
5387 ret = 2;
5389 nv_disable_hw_interrupts(dev, NVREG_IRQ_TIMER);
5390 if (!(np->msi_flags & NV_MSI_X_ENABLED))
5391 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
5392 else
5393 writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
5395 spin_unlock_irq(&np->lock);
5397 nv_free_irq(dev);
5399 np->msi_flags = save_msi_flags;
5401 if (netif_running(dev)) {
5402 writel(save_poll_interval, base + NvRegPollingInterval);
5403 writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
5404 /* restore original irq */
5405 if (nv_request_irq(dev, 0))
5406 return 0;
5409 return ret;
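/*
 * nv_loopback_test: offline self-test of the tx/rx data path. The MAC is
 * forced up via NvRegMisc1 and the packet filter is set to
 * NVREG_PFF_ALWAYS|NVREG_PFF_LOOPBACK, the rings are reinitialized, and a
 * single ETH_DATA_LEN frame with an incrementing byte pattern is transmitted;
 * after 500ms the received descriptor is checked for the expected length and
 * payload. Returns 1 on pass, 0 on failure.
 */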
5412 static int nv_loopback_test(struct net_device *dev)
5414 struct fe_priv *np = get_nvpriv(dev);
5415 u8 __iomem *base = get_hwbase(dev);
5416 struct sk_buff *tx_skb, *rx_skb;
5417 dma_addr_t test_dma_addr;
5418 u32 tx_flags_extra = (np->desc_ver == DESC_VER_1 ? NV_TX_LASTPACKET : NV_TX2_LASTPACKET);
5419 u32 Flags;
5420 int len, i, pkt_len;
5421 u8 *pkt_data;
5422 u32 filter_flags = 0;
5423 u32 misc1_flags = 0;
5424 int ret = 1;
5426 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
5428 if (netif_running(dev)) {
5429 nv_disable_irq(dev);
5430 filter_flags = readl(base + NvRegPacketFilterFlags);
5431 misc1_flags = readl(base + NvRegMisc1);
5432 } else {
5433 nv_txrx_reset(dev);
5436 /* reinit driver view of the rx queue */
5437 set_bufsize(dev);
5438 nv_init_ring(dev);
5440 /* setup hardware for loopback */
5441 writel(NVREG_MISC1_FORCE, base + NvRegMisc1);
5442 writel(NVREG_PFF_ALWAYS | NVREG_PFF_LOOPBACK, base + NvRegPacketFilterFlags);
5444 /* reinit nic view of the rx queue */
5445 writel(np->rx_buf_sz, base + NvRegOffloadConfig);
5446 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
5447 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
5448 base + NvRegRingSizes);
5449 pci_push(base);
5451 /* restart rx engine */
5452 nv_start_rx(dev);
5453 nv_start_tx(dev);
5455 /* setup packet for tx */
5456 pkt_len = ETH_DATA_LEN;
5457 tx_skb = dev_alloc_skb(pkt_len);
5458 pkt_data = skb_put(tx_skb, pkt_len);
5459 for (i = 0; i < pkt_len; i++)
5460 pkt_data[i] = (u8)(i & 0xff);
5461 #if NVVER > FEDORA7
5462 test_dma_addr = pci_map_single(np->pci_dev, tx_skb->data,
5463 skb_tailroom(tx_skb), PCI_DMA_FROMDEVICE);
5464 #else
5465 test_dma_addr = pci_map_single(np->pci_dev, tx_skb->data,
5466 tx_skb->end-tx_skb->data, PCI_DMA_FROMDEVICE);
5467 #endif
5469 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
5470 np->tx_ring.orig[0].PacketBuffer = cpu_to_le32(test_dma_addr);
5471 np->tx_ring.orig[0].FlagLen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra);
5472 } else {
5473 np->tx_ring.ex[0].PacketBufferHigh = cpu_to_le64(test_dma_addr) >> 32;
5474 np->tx_ring.ex[0].PacketBufferLow = cpu_to_le64(test_dma_addr) & 0x0FFFFFFFF;
5475 np->tx_ring.ex[0].FlagLen = cpu_to_le32((pkt_len-1) | np->tx_flags | tx_flags_extra);
5477 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
5478 pci_push(get_hwbase(dev));
5480 nv_msleep(500);
5482 /* check for rx of the packet */
5483 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
5484 Flags = le32_to_cpu(np->rx_ring.orig[0].FlagLen);
5485 len = nv_descr_getlength(&np->rx_ring.orig[0], np->desc_ver);
5487 } else {
5488 Flags = le32_to_cpu(np->rx_ring.ex[0].FlagLen);
5489 len = nv_descr_getlength_ex(&np->rx_ring.ex[0], np->desc_ver);
5492 if (Flags & NV_RX_AVAIL) {
5493 ret = 0;
5494 } else if (np->desc_ver == DESC_VER_1) {
5495 if (Flags & NV_RX_ERROR)
5496 ret = 0;
5497 } else {
5498 if (Flags & NV_RX2_ERROR) {
5499 ret = 0;
5503 if (ret) {
5504 if (len != pkt_len) {
5505 ret = 0;
5506 dprintk(KERN_DEBUG "%s: loopback len mismatch %d vs %d\n",
5507 dev->name, len, pkt_len);
5508 } else {
5509 rx_skb = np->rx_skb[0].skb;
5510 for (i = 0; i < pkt_len; i++) {
5511 if (rx_skb->data[i] != (u8)(i & 0xff)) {
5512 ret = 0;
5513 dprintk(KERN_DEBUG "%s: loopback pattern check failed on byte %d\n",
5514 dev->name, i);
5515 break;
5519 } else {
5520 dprintk(KERN_DEBUG "%s: loopback - did not receive test packet\n", dev->name);
5523 #if NVVER > FEDORA7
5524 pci_unmap_page(np->pci_dev, test_dma_addr,
5525 skb_end_pointer(tx_skb)-tx_skb->data,
5526 PCI_DMA_TODEVICE);
5527 #else
5528 pci_unmap_page(np->pci_dev, test_dma_addr,
5529 tx_skb->end-tx_skb->data,
5530 PCI_DMA_TODEVICE);
5531 #endif
5532 dev_kfree_skb_any(tx_skb);
5534 /* stop engines */
5535 nv_stop_rx(dev);
5536 nv_stop_tx(dev);
5537 nv_txrx_reset(dev);
5538 /* drain rx queue */
5539 nv_drain_rx(dev);
5540 nv_drain_tx(dev);
5542 if (netif_running(dev)) {
5543 writel(misc1_flags, base + NvRegMisc1);
5544 writel(filter_flags, base + NvRegPacketFilterFlags);
5545 nv_enable_irq(dev);
5548 return ret;
5551 static void nv_self_test(struct net_device *dev, struct ethtool_test *test, u64 *buffer)
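/*
 * ethtool self-test entry point.  buffer[0..3] report the link, register,
 * interrupt and loopback tests in that order (non-zero means failed).  The
 * offline tests quiesce the device first and, if it was running, rebuild
 * the rx/tx rings and interrupt state afterwards.
 */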
5553 struct fe_priv *np = get_nvpriv(dev);
5554 u8 __iomem *base = get_hwbase(dev);
5555 int result;
5556 memset(buffer, 0, nv_self_test_count(dev)*sizeof(u64));
5558 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
5560 if (!nv_link_test(dev)) {
5561 test->flags |= ETH_TEST_FL_FAILED;
5562 buffer[0] = 1;
5565 if (test->flags & ETH_TEST_FL_OFFLINE) {
5566 if (netif_running(dev)) {
5567 netif_stop_queue(dev);
5568 #if NVVER > FEDORA5
5569 netif_tx_lock_bh(dev);
5570 #else
5571 spin_lock_bh(&dev->xmit_lock);
5572 #endif
5573 spin_lock_irq(&np->lock);
5574 nv_disable_hw_interrupts(dev, np->irqmask);
5575 if (!(np->msi_flags & NV_MSI_X_ENABLED)) {
5576 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
5577 } else {
5578 writel(NVREG_IRQSTAT_MASK, base + NvRegMSIXIrqStatus);
5580 /* stop engines */
5581 nv_stop_rx(dev);
5582 nv_stop_tx(dev);
5583 nv_txrx_reset(dev);
5584 /* drain rx queue */
5585 nv_drain_rx(dev);
5586 nv_drain_tx(dev);
5587 spin_unlock_irq(&np->lock);
5588 #if NVVER > FEDORA5
5589 netif_tx_unlock_bh(dev);
5590 #else
5591 spin_unlock_bh(&dev->xmit_lock);
5592 #endif
5595 if (!nv_register_test(dev)) {
5596 test->flags |= ETH_TEST_FL_FAILED;
5597 buffer[1] = 1;
5600 result = nv_interrupt_test(dev);
5601 if (result != 1) {
5602 test->flags |= ETH_TEST_FL_FAILED;
5603 buffer[2] = 1;
5605 if (result == 0) {
5606 /* bail out */
5607 return;
5610 if (!nv_loopback_test(dev)) {
5611 test->flags |= ETH_TEST_FL_FAILED;
5612 buffer[3] = 1;
5615 if (netif_running(dev)) {
5616 /* reinit driver view of the rx queue */
5617 set_bufsize(dev);
5618 if (nv_init_ring(dev)) {
5619 if (!np->in_shutdown)
5620 mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
5622 /* reinit nic view of the rx queue */
5623 writel(np->rx_buf_sz, base + NvRegOffloadConfig);
5624 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
5625 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
5626 base + NvRegRingSizes);
5627 pci_push(base);
5628 writel(NVREG_TXRXCTL_KICK|np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
5629 pci_push(base);
5630 /* restart rx engine */
5631 nv_start_rx(dev);
5632 nv_start_tx(dev);
5633 netif_start_queue(dev);
5634 nv_enable_hw_interrupts(dev, np->irqmask);
5639 static void nv_get_strings(struct net_device *dev, u32 stringset, u8 *buffer)
5641 switch (stringset) {
5642 case ETH_SS_STATS:
5643 memcpy(buffer, &nv_estats_str, nv_get_stats_count(dev)*sizeof(struct nv_ethtool_str));
5644 break;
5645 case ETH_SS_TEST:
5646 memcpy(buffer, &nv_etests_str, nv_self_test_count(dev)*sizeof(struct nv_ethtool_str));
5647 break;
5651 static struct ethtool_ops ops = {
5652 .get_drvinfo = nv_get_drvinfo,
5653 .get_link = ethtool_op_get_link,
5654 .get_wol = nv_get_wol,
5655 .set_wol = nv_set_wol,
5656 .get_settings = nv_get_settings,
5657 .set_settings = nv_set_settings,
5658 .get_regs_len = nv_get_regs_len,
5659 .get_regs = nv_get_regs,
5660 .nway_reset = nv_nway_reset,
5661 #if NVVER < NVNEW
5662 #if NVVER > SUSE10
5663 .get_perm_addr = ethtool_op_get_perm_addr,
5664 #endif
5665 #endif
5666 .get_ringparam = nv_get_ringparam,
5667 .set_ringparam = nv_set_ringparam,
5668 .get_pauseparam = nv_get_pauseparam,
5669 .set_pauseparam = nv_set_pauseparam,
5670 .get_rx_csum = nv_get_rx_csum,
5671 .set_rx_csum = nv_set_rx_csum,
5672 .get_tx_csum = ethtool_op_get_tx_csum,
5673 .set_tx_csum = nv_set_tx_csum,
5674 .get_sg = ethtool_op_get_sg,
5675 .set_sg = nv_set_sg,
5676 #ifdef NETIF_F_TSO
5677 .get_tso = ethtool_op_get_tso,
5678 .set_tso = nv_set_tso,
5679 #endif
5680 .get_strings = nv_get_strings,
5681 .get_stats_count = nv_get_stats_count,
5682 .get_ethtool_stats = nv_get_ethtool_stats,
5683 .self_test_count = nv_self_test_count,
5684 .self_test = nv_self_test,
5685 };
5687 static void nv_vlan_rx_register(struct net_device *dev, struct vlan_group *grp)
5689 struct fe_priv *np = get_nvpriv(dev);
5691 spin_lock_irq(&np->lock);
5693 /* save vlan group */
5694 np->vlangrp = grp;
5696 if (grp) {
5697 /* enable vlan on MAC */
5698 np->txrxctl_bits |= NVREG_TXRXCTL_VLANSTRIP | NVREG_TXRXCTL_VLANINS;
5699 /* vlan is dependent on rx checksum */
5700 np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK;
5701 } else {
5702 /* disable vlan on MAC */
5703 np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANSTRIP;
5704 np->txrxctl_bits &= ~NVREG_TXRXCTL_VLANINS;
5705 if (!np->rx_csum)
5706 np->txrxctl_bits &= ~NVREG_TXRXCTL_RXCHECK;
5709 writel(np->txrxctl_bits, get_hwbase(dev) + NvRegTxRxControl);
5711 spin_unlock_irq(&np->lock);
5712 };
5714 static void nv_vlan_rx_kill_vid(struct net_device *dev, unsigned short vid)
5716 /* nothing to do */
5717 };
5719 /* The mgmt unit and driver use a semaphore to access the phy during init */
5720 static int nv_mgmt_acquire_sema(struct net_device *dev)
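/*
 * Poll (up to roughly 5 seconds) for the management unit to release its
 * semaphore, then make two attempts to latch the host semaphore in
 * NvRegTransmitterControl.  Returns 1 once the host owns the phy, 0 if the
 * semaphore could not be acquired.
 */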
5722 u8 __iomem *base = get_hwbase(dev);
5723 int i;
5724 u32 tx_ctrl, mgmt_sema;
5726 for (i = 0; i < 10; i++) {
5727 mgmt_sema = readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_MGMT_SEMA_MASK;
5728 if (mgmt_sema == NVREG_XMITCTL_MGMT_SEMA_FREE) {
5729 dprintk(KERN_INFO "forcedeth: nv_mgmt_acquire_sema: sema is free\n");
5730 break;
5732 nv_msleep(500);
5735 if (mgmt_sema != NVREG_XMITCTL_MGMT_SEMA_FREE) {
5736 dprintk(KERN_INFO "forcedeth: nv_mgmt_acquire_sema: sema is not free\n");
5737 return 0;
5740 for (i = 0; i < 2; i++) {
5741 tx_ctrl = readl(base + NvRegTransmitterControl);
5742 tx_ctrl |= NVREG_XMITCTL_HOST_SEMA_ACQ;
5743 writel(tx_ctrl, base + NvRegTransmitterControl);
5745 /* verify that semaphore was acquired */
5746 tx_ctrl = readl(base + NvRegTransmitterControl);
5747 if (((tx_ctrl & NVREG_XMITCTL_HOST_SEMA_MASK) == NVREG_XMITCTL_HOST_SEMA_ACQ) &&
5748 ((tx_ctrl & NVREG_XMITCTL_MGMT_SEMA_MASK) == NVREG_XMITCTL_MGMT_SEMA_FREE)) {
5749 dprintk(KERN_INFO "forcedeth: nv_mgmt_acquire_sema: acquired sema\n");
5750 return 1;
5751 } else
5752 udelay(50);
5755 dprintk(KERN_INFO "forcedeth: nv_mgmt_acquire_sema: exit\n");
5756 return 0;
5759 static int nv_open(struct net_device *dev)
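/*
 * Bring the interface up: clear any stale MAC configuration, rebuild the
 * descriptor rings, reprogram ring addresses/sizes, link speed, deferral
 * and polling registers, power up the phy, request the IRQ, then start the
 * rx/tx engines with a forced link-speed update.
 */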
5761 struct fe_priv *np = get_nvpriv(dev);
5762 u8 __iomem *base = get_hwbase(dev);
5763 int ret = 1;
5764 u32 tx_ctrl;
5765 int oom, i;
5767 dprintk(KERN_DEBUG "nv_open: begin\n");
5769 /* erase previous misconfiguration */
5770 if (np->driver_data & DEV_HAS_POWER_CNTRL)
5771 nv_mac_reset(dev);
5772 /* stop adapter: ignored, 4.3 seems to be overkill */
5773 writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
5774 writel(0, base + NvRegMulticastAddrB);
5775 writel(NVREG_MCASTMASKA_NONE, base + NvRegMulticastMaskA);
5776 writel(NVREG_MCASTMASKB_NONE, base + NvRegMulticastMaskB);
5777 writel(0, base + NvRegPacketFilterFlags);
5779 if (np->mac_in_use){
5780 tx_ctrl = readl(base + NvRegTransmitterControl);
5781 tx_ctrl &= ~NVREG_XMITCTL_START;
5782 }else
5783 tx_ctrl = 0;
5784 writel(tx_ctrl, base + NvRegTransmitterControl);
5785 writel(0, base + NvRegReceiverControl);
5787 writel(0, base + NvRegAdapterControl);
5789 if (np->pause_flags & NV_PAUSEFRAME_TX_CAPABLE)
5790 writel(NVREG_TX_PAUSEFRAME_DISABLE, base + NvRegTxPauseFrame);
5792 /* initialize descriptor rings */
5793 set_bufsize(dev);
5794 oom = nv_init_ring(dev);
5796 writel(0, base + NvRegLinkSpeed);
5797 writel(readl(base + NvRegTransmitPoll) & NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll);
5798 nv_txrx_reset(dev);
5799 writel(0, base + NvRegUnknownSetupReg6);
5801 np->in_shutdown = 0;
5803 /* give hw rings */
5804 setup_hw_rings(dev, NV_SETUP_RX_RING | NV_SETUP_TX_RING);
5805 writel( ((np->rx_ring_size-1) << NVREG_RINGSZ_RXSHIFT) + ((np->tx_ring_size-1) << NVREG_RINGSZ_TXSHIFT),
5806 base + NvRegRingSizes);
5808 /* continue setup */
5809 writel(np->linkspeed, base + NvRegLinkSpeed);
5810 if (np->desc_ver == DESC_VER_1)
5811 writel(NVREG_TX_WM_DESC1_DEFAULT, base + NvRegTxWatermark);
5812 else
5813 writel(NVREG_TX_WM_DESC2_3_DEFAULT, base + NvRegTxWatermark);
5814 writel(np->txrxctl_bits, base + NvRegTxRxControl);
5815 writel(np->vlanctl_bits, base + NvRegVlanControl);
5816 pci_push(base);
5817 writel(NVREG_TXRXCTL_BIT1|np->txrxctl_bits, base + NvRegTxRxControl);
5818 reg_delay(dev, NvRegUnknownSetupReg5, NVREG_UNKSETUP5_BIT31, NVREG_UNKSETUP5_BIT31,
5819 NV_SETUP5_DELAY, NV_SETUP5_DELAYMAX,
5820 KERN_INFO "open: SetupReg5, Bit 31 remained off\n");
5822 writel(0, base + NvRegMIIMask);
5823 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
5824 writel(NVREG_MIISTAT_MASK_ALL, base + NvRegMIIStatus);
5826 /* continue setup */
5827 writel(NVREG_MISC1_FORCE | NVREG_MISC1_HD, base + NvRegMisc1);
5828 writel(readl(base + NvRegTransmitterStatus), base + NvRegTransmitterStatus);
5829 writel(NVREG_PFF_ALWAYS, base + NvRegPacketFilterFlags);
5830 writel(np->rx_buf_sz, base + NvRegOffloadConfig);
5832 writel(readl(base + NvRegReceiverStatus), base + NvRegReceiverStatus);
5833 get_random_bytes(&i, sizeof(i));
5834 writel(NVREG_RNDSEED_FORCE | (i&NVREG_RNDSEED_MASK), base + NvRegRandomSeed);
5835 writel(NVREG_TX_DEFERRAL_DEFAULT, base + NvRegTxDeferral);
5836 writel(NVREG_RX_DEFERRAL_DEFAULT, base + NvRegRxDeferral);
5837 if (poll_interval == -1) {
5838 if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT)
5839 writel(NVREG_POLL_DEFAULT_THROUGHPUT, base + NvRegPollingInterval);
5840 else
5841 writel(NVREG_POLL_DEFAULT_CPU, base + NvRegPollingInterval);
5843 else
5844 writel(poll_interval & 0xFFFF, base + NvRegPollingInterval);
5845 writel(NVREG_UNKSETUP6_VAL, base + NvRegUnknownSetupReg6);
5846 writel((np->phyaddr << NVREG_ADAPTCTL_PHYSHIFT)|NVREG_ADAPTCTL_PHYVALID|NVREG_ADAPTCTL_RUNNING,
5847 base + NvRegAdapterControl);
5848 writel(NVREG_MIISPEED_BIT8|NVREG_MIIDELAY, base + NvRegMIISpeed);
5849 writel(NVREG_MII_LINKCHANGE, base + NvRegMIIMask);
5850 if (np->wolenabled)
5851 writel(NVREG_WAKEUPFLAGS_ENABLE , base + NvRegWakeUpFlags);
5853 i = readl(base + NvRegPowerState);
5854 if ( (i & NVREG_POWERSTATE_POWEREDUP) == 0)
5855 writel(NVREG_POWERSTATE_POWEREDUP|i, base + NvRegPowerState);
5857 pci_push(base);
5858 udelay(10);
5859 writel(readl(base + NvRegPowerState) | NVREG_POWERSTATE_VALID, base + NvRegPowerState);
5861 nv_disable_hw_interrupts(dev, np->irqmask);
5862 pci_push(base);
5863 writel(NVREG_MIISTAT_MASK_ALL, base + NvRegMIIStatus);
5864 writel(NVREG_IRQSTAT_MASK, base + NvRegIrqStatus);
5865 pci_push(base);
5867 if (nv_request_irq(dev, 0)) {
5868 goto out_drain;
5871 /* ask for interrupts */
5872 nv_enable_hw_interrupts(dev, np->irqmask);
5874 spin_lock_irq(&np->lock);
5875 writel(NVREG_MCASTADDRA_FORCE, base + NvRegMulticastAddrA);
5876 writel(0, base + NvRegMulticastAddrB);
5877 writel(NVREG_MCASTMASKA_NONE, base + NvRegMulticastMaskA);
5878 writel(NVREG_MCASTMASKB_NONE, base + NvRegMulticastMaskB);
5879 writel(NVREG_PFF_ALWAYS|NVREG_PFF_MYADDR, base + NvRegPacketFilterFlags);
5880 /* One manual link speed update: Interrupts are enabled, future link
5881 * speed changes cause interrupts and are handled by nv_link_irq().
5882 */
5884 u32 miistat;
5885 miistat = readl(base + NvRegMIIStatus);
5886 writel(NVREG_MIISTAT_MASK_ALL, base + NvRegMIIStatus);
5887 dprintk(KERN_INFO "startup: got 0x%08x.\n", miistat);
5889 /* set linkspeed to invalid value, thus force nv_update_linkspeed
5890 * to init hw */
5891 np->linkspeed = 0;
5892 ret = nv_update_linkspeed(dev);
5893 nv_start_rx(dev);
5894 nv_start_tx(dev);
5895 netif_start_queue(dev);
5896 if (ret) {
5897 netif_carrier_on(dev);
5898 } else {
5899 dprintk(KERN_DEBUG "%s: no link during initialization.\n", dev->name);
5900 netif_carrier_off(dev);
5902 if (oom)
5903 mod_timer(&np->oom_kick, jiffies + OOM_REFILL);
5905 /* start statistics timer */
5906 mod_timer(&np->stats_poll, jiffies + STATS_INTERVAL);
5908 spin_unlock_irq(&np->lock);
5910 return 0;
5911 out_drain:
5912 drain_ring(dev);
5913 return ret;
5916 static int nv_close(struct net_device *dev)
5918 struct fe_priv *np = get_nvpriv(dev);
5919 u8 __iomem *base;
5921 dprintk(KERN_DEBUG "nv_close: begin\n");
5922 spin_lock_irq(&np->lock);
5923 np->in_shutdown = 1;
5924 spin_unlock_irq(&np->lock);
5926 #if NVVER > RHES3
5927 synchronize_irq(np->pci_dev->irq);
5928 #else
5929 synchronize_irq();
5930 #endif
5932 del_timer_sync(&np->oom_kick);
5933 del_timer_sync(&np->nic_poll);
5934 del_timer_sync(&np->stats_poll);
5936 netif_stop_queue(dev);
5937 spin_lock_irq(&np->lock);
5938 nv_stop_tx(dev);
5939 nv_stop_rx(dev);
5940 nv_txrx_reset(dev);
5942 /* disable interrupts on the nic or we will lock up */
5943 base = get_hwbase(dev);
5944 nv_disable_hw_interrupts(dev, np->irqmask);
5945 pci_push(base);
5946 dprintk(KERN_INFO "%s: Irqmask is zero again\n", dev->name);
5948 spin_unlock_irq(&np->lock);
5950 nv_free_irq(dev);
5952 drain_ring(dev);
5954 if (np->wolenabled)
5955 nv_start_rx(dev);
5957 /* FIXME: power down nic */
5959 return 0;
5962 static int __devinit nv_probe(struct pci_dev *pci_dev, const struct pci_device_id *id)
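/*
 * PCI probe: allocate the net_device, map the register window, derive the
 * descriptor format and feature set from the per-device driver_data flags,
 * allocate the rings, read (and if necessary un-reverse) the MAC address,
 * find the phy and register the netdev.
 */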
5964 struct net_device *dev;
5965 struct fe_priv *np;
5966 unsigned long addr;
5967 u8 __iomem *base;
5968 int err, i;
5969 u32 powerstate, phystate_orig = 0, phystate, txreg,reg,mii_status;
5970 int phyinitialized = 0;
5972 /* modify network device class id */
5973 quirk_nforce_network_class(pci_dev);
5974 dev = alloc_etherdev(sizeof(struct fe_priv));
5975 err = -ENOMEM;
5976 if (!dev)
5977 goto out;
5979 dprintk(KERN_DEBUG "%s:%s\n",dev->name,__FUNCTION__);
5980 np = get_nvpriv(dev);
5981 np->pci_dev = pci_dev;
5982 spin_lock_init(&np->lock);
5983 spin_lock_init(&np->timer_lock);
5984 SET_MODULE_OWNER(dev);
5985 SET_NETDEV_DEV(dev, &pci_dev->dev);
5987 init_timer(&np->oom_kick);
5988 np->oom_kick.data = (unsigned long) dev;
5989 np->oom_kick.function = &nv_do_rx_refill; /* timer handler */
5990 init_timer(&np->nic_poll);
5991 np->nic_poll.data = (unsigned long) dev;
5992 np->nic_poll.function = &nv_do_nic_poll; /* timer handler */
5993 init_timer(&np->stats_poll);
5994 np->stats_poll.data = (unsigned long) dev;
5995 np->stats_poll.function = &nv_do_stats_poll; /* timer handler */
5997 err = pci_enable_device(pci_dev);
5998 if (err) {
5999 printk(KERN_INFO "forcedeth: pci_enable_device failed (%d) for device %s\n",
6000 err, pci_name(pci_dev));
6001 goto out_free;
6004 pci_set_master(pci_dev);
6006 err = pci_request_regions(pci_dev, DRV_NAME);
6007 if (err < 0)
6008 goto out_disable;
6010 if (id->driver_data & (DEV_HAS_VLAN|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_STATISTICS_V2))
6011 np->register_size = NV_PCI_REGSZ_VER3;
6012 else if (id->driver_data & DEV_HAS_STATISTICS_V1)
6013 np->register_size = NV_PCI_REGSZ_VER2;
6014 else
6015 np->register_size = NV_PCI_REGSZ_VER1;
6017 err = -EINVAL;
6018 addr = 0;
6019 for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
6020 dprintk(KERN_DEBUG "%s: resource %d start %p len %ld flags 0x%08lx.\n",
6021 pci_name(pci_dev), i, (void*)pci_resource_start(pci_dev, i),
6022 (long)pci_resource_len(pci_dev, i),
6023 (long)pci_resource_flags(pci_dev, i));
6024 if (pci_resource_flags(pci_dev, i) & IORESOURCE_MEM &&
6025 pci_resource_len(pci_dev, i) >= np->register_size) {
6026 addr = pci_resource_start(pci_dev, i);
6027 break;
6030 if (i == DEVICE_COUNT_RESOURCE) {
6031 printk(KERN_INFO "forcedeth: Couldn't find register window for device %s.\n",
6032 pci_name(pci_dev));
6033 goto out_relreg;
6036 /* copy of driver data */
6037 np->driver_data = id->driver_data;
6039 /* handle different descriptor versions */
6040 if (id->driver_data & DEV_HAS_HIGH_DMA) {
6041 /* packet format 3: supports 40-bit addressing */
6042 np->desc_ver = DESC_VER_3;
6043 np->txrxctl_bits = NVREG_TXRXCTL_DESC_3;
6044 if (dma_64bit) {
6045 if (pci_set_dma_mask(pci_dev, DMA_39BIT_MASK)) {
6046 printk(KERN_INFO "forcedeth: 64-bit DMA failed, using 32-bit addressing for device %s.\n",
6047 pci_name(pci_dev));
6048 } else {
6049 dev->features |= NETIF_F_HIGHDMA;
6050 printk(KERN_INFO "forcedeth: using HIGHDMA\n");
6052 #if NVVER > RHES3
6053 if (pci_set_consistent_dma_mask(pci_dev, DMA_39BIT_MASK)) {
6054 printk(KERN_INFO "forcedeth: 64-bit DMA (consistent) failed, using 32-bit ring buffers for device %s.\n",
6055 pci_name(pci_dev));
6057 #endif
6059 } else if (id->driver_data & DEV_HAS_LARGEDESC) {
6060 /* packet format 2: supports jumbo frames */
6061 np->desc_ver = DESC_VER_2;
6062 np->txrxctl_bits = NVREG_TXRXCTL_DESC_2;
6063 } else {
6064 /* original packet format */
6065 np->desc_ver = DESC_VER_1;
6066 np->txrxctl_bits = NVREG_TXRXCTL_DESC_1;
6069 np->pkt_limit = NV_PKTLIMIT_1;
6070 if (id->driver_data & DEV_HAS_LARGEDESC)
6071 np->pkt_limit = NV_PKTLIMIT_2;
6072 if (mtu > np->pkt_limit) {
6073 printk(KERN_INFO "forcedeth: MTU value of %d is too large. Setting to maximum value of %d\n",
6074 mtu, np->pkt_limit);
6075 dev->mtu = np->pkt_limit;
6076 } else {
6077 dev->mtu = mtu;
6080 if (id->driver_data & DEV_HAS_CHECKSUM) {
6081 if (rx_checksum_offload) {
6082 np->rx_csum = 1;
6083 np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK;
6086 if (tx_checksum_offload)
6087 #if NVVER > RHES4
6088 dev->features |= NETIF_F_HW_CSUM;
6089 #else
6090 dev->features |= NETIF_F_IP_CSUM;
6091 #endif
6093 if (scatter_gather)
6094 dev->features |= NETIF_F_SG;
6095 #ifdef NETIF_F_TSO
6096 if (tso_offload)
6097 dev->features |= NETIF_F_TSO;
6098 #endif
6101 np->vlanctl_bits = 0;
6102 if (id->driver_data & DEV_HAS_VLAN && tagging_8021pq) {
6103 np->vlanctl_bits = NVREG_VLANCONTROL_ENABLE;
6104 dev->features |= NETIF_F_HW_VLAN_RX | NETIF_F_HW_VLAN_TX;
6105 dev->vlan_rx_register = nv_vlan_rx_register;
6106 dev->vlan_rx_kill_vid = nv_vlan_rx_kill_vid;
6107 /* vlan needs rx checksum support, so force it */
6108 np->txrxctl_bits |= NVREG_TXRXCTL_RXCHECK;
6111 np->msi_flags = 0;
6112 if ((id->driver_data & DEV_HAS_MSI) && msi) {
6113 np->msi_flags |= NV_MSI_CAPABLE;
6115 if ((id->driver_data & DEV_HAS_MSI_X) && msix) {
6116 np->msi_flags |= NV_MSI_X_CAPABLE;
6119 np->pause_flags = NV_PAUSEFRAME_RX_CAPABLE;
6120 if (rx_flow_control == NV_RX_FLOW_CONTROL_ENABLED)
6121 np->pause_flags |= NV_PAUSEFRAME_RX_REQ;
6122 if ((id->driver_data & DEV_HAS_PAUSEFRAME_TX_V1) ||
6123 (id->driver_data & DEV_HAS_PAUSEFRAME_TX_V2)||
6124 (id->driver_data & DEV_HAS_PAUSEFRAME_TX_V3))
6126 np->pause_flags |= NV_PAUSEFRAME_TX_CAPABLE;
6127 if (tx_flow_control == NV_TX_FLOW_CONTROL_ENABLED)
6128 np->pause_flags |= NV_PAUSEFRAME_TX_REQ;
6130 if (autoneg == AUTONEG_ENABLE) {
6131 np->pause_flags |= NV_PAUSEFRAME_AUTONEG;
6132 } else if (speed_duplex == NV_SPEED_DUPLEX_1000_FULL_DUPLEX) {
6133 printk(KERN_INFO "forcedeth: speed_duplex of 1000 full cannot be enabled if autoneg is disabled\n");
6134 goto out_relreg;
6137 /* save phy config */
6138 np->autoneg = autoneg;
6139 np->speed_duplex = speed_duplex;
6141 err = -ENOMEM;
6142 np->base = ioremap(addr, np->register_size);
6143 if (!np->base)
6144 goto out_relreg;
6145 dev->base_addr = (unsigned long)np->base;
6147 /* stop engines */
6148 nv_stop_rx(dev);
6149 nv_stop_tx(dev);
6150 nv_txrx_reset(dev);
6152 dev->irq = pci_dev->irq;
6154 if (np->desc_ver == DESC_VER_1) {
6155 if (rx_ring_size > RING_MAX_DESC_VER_1) {
6156 printk(KERN_INFO "forcedeth: rx_ring_size of %d is too large. Setting to maximum of %d\n",
6157 rx_ring_size, RING_MAX_DESC_VER_1);
6158 rx_ring_size = RING_MAX_DESC_VER_1;
6160 if (tx_ring_size > RING_MAX_DESC_VER_1) {
6161 printk(KERN_INFO "forcedeth: tx_ring_size of %d is too large. Setting to maximum of %d\n",
6162 tx_ring_size, RING_MAX_DESC_VER_1);
6163 tx_ring_size = RING_MAX_DESC_VER_1;
6165 } else {
6166 if (rx_ring_size > RING_MAX_DESC_VER_2_3) {
6167 printk(KERN_INFO "forcedeth: rx_ring_size of %d is too large. Setting to maximum of %d\n",
6168 rx_ring_size, RING_MAX_DESC_VER_2_3);
6169 rx_ring_size = RING_MAX_DESC_VER_2_3;
6171 if (tx_ring_size > RING_MAX_DESC_VER_2_3) {
6172 printk(KERN_INFO "forcedeth: tx_ring_size of %d is too large. Setting to maximum of %d\n",
6173 tx_ring_size, RING_MAX_DESC_VER_2_3);
6174 tx_ring_size = RING_MAX_DESC_VER_2_3;
6177 np->rx_ring_size = rx_ring_size;
6178 np->tx_ring_size = tx_ring_size;
6179 np->tx_limit_stop = tx_ring_size - TX_LIMIT_DIFFERENCE;
6180 np->tx_limit_start = tx_ring_size - TX_LIMIT_DIFFERENCE - 1;
6182 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2) {
6183 np->rx_ring.orig = pci_alloc_consistent(pci_dev,
6184 sizeof(struct ring_desc) * (np->rx_ring_size + np->tx_ring_size),
6185 &np->ring_addr);
6186 if (!np->rx_ring.orig)
6187 goto out_unmap;
6188 np->tx_ring.orig = &np->rx_ring.orig[np->rx_ring_size];
6189 } else {
6190 np->rx_ring.ex = pci_alloc_consistent(pci_dev,
6191 sizeof(struct ring_desc_ex) * (np->rx_ring_size + np->tx_ring_size),
6192 &np->ring_addr);
6193 if (!np->rx_ring.ex)
6194 goto out_unmap;
6195 np->tx_ring.ex = &np->rx_ring.ex[np->rx_ring_size];
6197 np->rx_skb = kmalloc(sizeof(struct nv_skb_map) * np->rx_ring_size, GFP_KERNEL);
6198 np->tx_skb = kmalloc(sizeof(struct nv_skb_map) * np->tx_ring_size, GFP_KERNEL);
6199 if (!np->rx_skb || !np->tx_skb)
6200 goto out_freering;
6201 memset(np->rx_skb, 0, sizeof(struct nv_skb_map) * np->rx_ring_size);
6202 memset(np->tx_skb, 0, sizeof(struct nv_skb_map) * np->tx_ring_size);
6204 dev->open = nv_open;
6205 dev->stop = nv_close;
6206 if (np->desc_ver == DESC_VER_1 || np->desc_ver == DESC_VER_2)
6207 dev->hard_start_xmit = nv_start_xmit;
6208 else
6209 dev->hard_start_xmit = nv_start_xmit_optimized;
6210 dev->get_stats = nv_get_stats;
6211 dev->change_mtu = nv_change_mtu;
6212 dev->set_mac_address = nv_set_mac_address;
6213 dev->set_multicast_list = nv_set_multicast;
6215 #if NVVER < SLES9
6216 dev->do_ioctl = nv_ioctl;
6217 #endif
6219 #if NVVER > RHES3
6220 #ifdef CONFIG_NET_POLL_CONTROLLER
6221 dev->poll_controller = nv_poll_controller;
6222 #endif
6223 #else
6224 dev->poll_controller = nv_poll_controller;
6225 #endif
6227 SET_ETHTOOL_OPS(dev, &ops);
6228 dev->tx_timeout = nv_tx_timeout;
6229 dev->watchdog_timeo = NV_WATCHDOG_TIMEO;
6231 pci_set_drvdata(pci_dev, dev);
6233 /* read the mac address */
6234 base = get_hwbase(dev);
6235 np->orig_mac[0] = readl(base + NvRegMacAddrA);
6236 np->orig_mac[1] = readl(base + NvRegMacAddrB);
6238 /* check the workaround bit for correct mac address order */
6239 txreg = readl(base + NvRegTransmitPoll);
6240 if ((txreg & NVREG_TRANSMITPOLL_MAC_ADDR_REV) ||
6241 (id->driver_data & DEV_HAS_CORRECT_MACADDR)) {
6242 /* mac address is already in correct order */
6243 dev->dev_addr[0] = (np->orig_mac[0] >> 0) & 0xff;
6244 dev->dev_addr[1] = (np->orig_mac[0] >> 8) & 0xff;
6245 dev->dev_addr[2] = (np->orig_mac[0] >> 16) & 0xff;
6246 dev->dev_addr[3] = (np->orig_mac[0] >> 24) & 0xff;
6247 dev->dev_addr[4] = (np->orig_mac[1] >> 0) & 0xff;
6248 dev->dev_addr[5] = (np->orig_mac[1] >> 8) & 0xff;
6249 } else {
6250 dev->dev_addr[0] = (np->orig_mac[1] >> 8) & 0xff;
6251 dev->dev_addr[1] = (np->orig_mac[1] >> 0) & 0xff;
6252 dev->dev_addr[2] = (np->orig_mac[0] >> 24) & 0xff;
6253 dev->dev_addr[3] = (np->orig_mac[0] >> 16) & 0xff;
6254 dev->dev_addr[4] = (np->orig_mac[0] >> 8) & 0xff;
6255 dev->dev_addr[5] = (np->orig_mac[0] >> 0) & 0xff;
6256 /* set permanent address to be correct as well */
6257 np->orig_mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) +
6258 (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24);
6259 np->orig_mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8);
6260 writel(txreg|NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll);
6262 #if NVVER > SUSE10
6263 memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
6265 if (!is_valid_ether_addr(dev->perm_addr)){
6266 #else
6267 if (!is_valid_ether_addr(dev->dev_addr)) {
6268 #endif
6269 /*
6270 * Bad mac address. At least one bios sets the mac address
6271 * to 01:23:45:67:89:ab
6272 */
6273 printk(KERN_ERR "%s: Invalid MAC address detected: %02x:%02x:%02x:%02x:%02x:%02x\n",
6274 pci_name(pci_dev),
6275 dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
6276 dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
6277 printk(KERN_ERR "Please complain to your hardware vendor. Switching to a random MAC.\n");
6278 dev->dev_addr[0] = 0x00;
6279 dev->dev_addr[1] = 0x00;
6280 dev->dev_addr[2] = 0x6c;
6281 get_random_bytes(&dev->dev_addr[3], 3);
6284 dprintk(KERN_DEBUG "%s: MAC Address %02x:%02x:%02x:%02x:%02x:%02x\n", pci_name(pci_dev),
6285 dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
6286 dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
6287 /* set mac address */
6288 nv_copy_mac_to_hw(dev);
6290 /* disable WOL */
6291 writel(0, base + NvRegWakeUpFlags);
6292 np->wolenabled = wol;
6294 if (id->driver_data & DEV_HAS_POWER_CNTRL) {
6295 u8 revision_id;
6296 pci_read_config_byte(pci_dev, PCI_REVISION_ID, &revision_id);
6298 /* take phy and nic out of low power mode */
6299 powerstate = readl(base + NvRegPowerState2);
6300 powerstate &= ~NVREG_POWERSTATE2_POWERUP_MASK;
6301 if ((id->device == PCI_DEVICE_ID_NVIDIA_NVENET_12 ||
6302 id->device == PCI_DEVICE_ID_NVIDIA_NVENET_13) &&
6303 revision_id >= 0xA3)
6304 powerstate |= NVREG_POWERSTATE2_POWERUP_REV_A3;
6305 writel(powerstate, base + NvRegPowerState2);
6308 if (np->desc_ver == DESC_VER_1) {
6309 np->tx_flags = NV_TX_VALID;
6310 } else {
6311 np->tx_flags = NV_TX2_VALID;
6313 if (optimization_mode == NV_OPTIMIZATION_MODE_THROUGHPUT) {
6314 np->irqmask = NVREG_IRQMASK_THROUGHPUT;
6315 if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */
6316 np->msi_flags |= 0x0003;
6317 } else {
6318 np->irqmask = NVREG_IRQMASK_CPU;
6319 if (np->msi_flags & NV_MSI_X_CAPABLE) /* set number of vectors */
6320 np->msi_flags |= 0x0001;
6323 if (id->driver_data & DEV_NEED_TIMERIRQ)
6324 np->irqmask |= NVREG_IRQ_TIMER;
6325 if (id->driver_data & DEV_NEED_LINKTIMER) {
6326 dprintk(KERN_INFO "%s: link timer on.\n", pci_name(pci_dev));
6327 np->need_linktimer = 1;
6328 np->link_timeout = jiffies + LINK_TIMEOUT;
6329 } else {
6330 dprintk(KERN_INFO "%s: link timer off.\n", pci_name(pci_dev));
6331 np->need_linktimer = 0;
6334 /* clear phy state and temporarily halt phy interrupts */
6335 writel(0, base + NvRegMIIMask);
6336 phystate = readl(base + NvRegAdapterControl);
6337 if (phystate & NVREG_ADAPTCTL_RUNNING) {
6338 phystate_orig = 1;
6339 phystate &= ~NVREG_ADAPTCTL_RUNNING;
6340 writel(phystate, base + NvRegAdapterControl);
6342 writel(NVREG_MIISTAT_MASK_ALL, base + NvRegMIIStatus);
6344 if (id->driver_data & DEV_HAS_MGMT_UNIT) {
6345 /* management unit running on the mac? */
6346 if (readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_SYNC_PHY_INIT) {
6347 np->mac_in_use = readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_MGMT_ST;
6348 dprintk(KERN_INFO "%s: mgmt unit is running. mac in use %x.\n", pci_name(pci_dev), np->mac_in_use);
6349 for (i = 0; i < 5000; i++) {
6350 nv_msleep(1);
6351 if (nv_mgmt_acquire_sema(dev)) {
6352 /* management unit setup the phy already? */
6353 if ((readl(base + NvRegTransmitterControl) & NVREG_XMITCTL_SYNC_MASK) ==
6354 NVREG_XMITCTL_SYNC_PHY_INIT) {
6355 if(np->mac_in_use){
6356 /* phy is inited by mgmt unit */
6357 phyinitialized = 1;
6358 dprintk(KERN_INFO "%s: Phy already initialized by mgmt unit.\n", pci_name(pci_dev));
6360 } else {
6361 /* we need to init the phy */
6363 break;
6369 /* find a suitable phy */
6370 for (i = 1; i <= 32; i++) {
6371 int id1, id2;
6372 int phyaddr = i & 0x1F;
6374 spin_lock_irq(&np->lock);
6375 id1 = mii_rw(dev, phyaddr, MII_PHYSID1, MII_READ);
6376 spin_unlock_irq(&np->lock);
6377 if (id1 < 0 || id1 == 0xffff)
6378 continue;
6379 spin_lock_irq(&np->lock);
6380 id2 = mii_rw(dev, phyaddr, MII_PHYSID2, MII_READ);
6381 spin_unlock_irq(&np->lock);
6382 if (id2 < 0 || id2 == 0xffff)
6383 continue;
6385 np->phy_model = id2 & PHYID2_MODEL_MASK;
6386 id1 = (id1 & PHYID1_OUI_MASK) << PHYID1_OUI_SHFT;
6387 id2 = (id2 & PHYID2_OUI_MASK) >> PHYID2_OUI_SHFT;
6388 dprintk(KERN_DEBUG "%s: open: Found PHY %04x:%04x at address %d.\n",
6389 pci_name(pci_dev), id1, id2, phyaddr);
6390 np->phyaddr = phyaddr;
6391 np->phy_oui = id1 | id2;
6392 break;
6394 if (i == 33) {
6395 printk(KERN_INFO "%s: open: Could not find a valid PHY.\n",
6396 pci_name(pci_dev));
6397 goto out_error;
6400 if (!phyinitialized) {
6401 /* reset it */
6402 phy_init(dev);
6403 np->autoneg = autoneg;
6404 } else {
6405 /* see if it is a gigabit phy */
6406 mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
6407 if (mii_status & PHY_GIGABIT) {
6408 np->gigabit = PHY_GIGABIT;
6410 reg = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
6411 np->autoneg = (reg & BMCR_ANENABLE ? AUTONEG_ENABLE:AUTONEG_DISABLE);
6412 if(np->autoneg == AUTONEG_DISABLE){
6413 reg = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
6414 np->fixed_mode = reg;
6418 if (np->phy_oui == PHY_OUI_MARVELL && np->phy_model == PHY_MODEL_MARVELL_E1011 && np->pci_dev->subsystem_vendor == 0x108E && np->pci_dev->subsystem_device == 0x6676) {
6419 nv_LED_on(dev);
6422 /* set default link speed settings */
6423 np->linkspeed = NVREG_LINKSPEED_FORCE|NVREG_LINKSPEED_10;
6424 np->duplex = 0;
6426 err = register_netdev(dev);
6427 if (err) {
6428 printk(KERN_INFO "forcedeth: unable to register netdev: %d\n", err);
6429 goto out_error;
6431 printk(KERN_INFO "%s: forcedeth.c: subsystem: %05x:%04x bound to %s\n",
6432 dev->name, pci_dev->subsystem_vendor, pci_dev->subsystem_device,
6433 pci_name(pci_dev));
6435 return 0;
6437 out_error:
6438 if (phystate_orig)
6439 writel(phystate|NVREG_ADAPTCTL_RUNNING, base + NvRegAdapterControl);
6440 pci_set_drvdata(pci_dev, NULL);
6441 out_freering:
6442 free_rings(dev);
6443 out_unmap:
6444 iounmap(get_hwbase(dev));
6445 out_relreg:
6446 pci_release_regions(pci_dev);
6447 out_disable:
6448 pci_disable_device(pci_dev);
6449 out_free:
6450 free_netdev(dev);
6451 out:
6452 return err;
6455 #ifdef CONFIG_PM
6456 static void nv_set_low_speed(struct net_device *dev);
6457 #endif
6458 static void __devexit nv_remove(struct pci_dev *pci_dev)
6460 struct net_device *dev = pci_get_drvdata(pci_dev);
6461 struct fe_priv *np = get_nvpriv(dev);
6462 u8 __iomem *base = get_hwbase(dev);
6463 u32 tx_ctrl;
6465 if (np->phy_oui == PHY_OUI_MARVELL && np->phy_model == PHY_MODEL_MARVELL_E1011 && np->pci_dev->subsystem_vendor == 0x108E && np->pci_dev->subsystem_device == 0x6676) {
6466 nv_LED_off(dev);
6468 unregister_netdev(dev);
6469 /* special op: write back the misordered MAC address - otherwise
6470 * the next nv_probe would see a wrong address.
6471 */
6472 writel(np->orig_mac[0], base + NvRegMacAddrA);
6473 writel(np->orig_mac[1], base + NvRegMacAddrB);
6475 /* relinquish control of the semaphore */
6476 if (np->mac_in_use){
6477 tx_ctrl = readl(base + NvRegTransmitterControl);
6478 tx_ctrl &= ~NVREG_XMITCTL_HOST_SEMA_MASK;
6479 writel(tx_ctrl, base + NvRegTransmitterControl);
6482 /* free all structures */
6483 free_rings(dev);
6484 iounmap(get_hwbase(dev));
6485 pci_release_regions(pci_dev);
6486 pci_disable_device(pci_dev);
6487 free_netdev(dev);
6488 pci_set_drvdata(pci_dev, NULL);
6491 static struct pci_device_id pci_tbl[] = {
6492 { /* nForce Ethernet Controller */
6493 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_1),
6494 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
6495 },
6496 { /* nForce2 Ethernet Controller */
6497 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_2),
6498 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
6499 },
6500 { /* nForce3 Ethernet Controller */
6501 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_3),
6502 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER,
6503 },
6504 { /* nForce3 Ethernet Controller */
6505 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_4),
6506 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
6507 },
6508 { /* nForce3 Ethernet Controller */
6509 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_5),
6510 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
6511 },
6512 { /* nForce3 Ethernet Controller */
6513 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_6),
6514 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
6515 },
6516 { /* nForce3 Ethernet Controller */
6517 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_7),
6518 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM,
6519 },
6520 { /* CK804 Ethernet Controller */
6521 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_8),
6522 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_STATISTICS_V1,
6523 },
6524 { /* CK804 Ethernet Controller */
6525 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_9),
6526 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_STATISTICS_V1,
6527 },
6528 { /* MCP04 Ethernet Controller */
6529 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_10),
6530 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_STATISTICS_V1,
6531 },
6532 { /* MCP04 Ethernet Controller */
6533 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_11),
6534 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_STATISTICS_V1,
6535 },
6536 { /* MCP51 Ethernet Controller */
6537 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_12),
6538 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_STATISTICS_V1,
6539 },
6540 { /* MCP51 Ethernet Controller */
6541 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_13),
6542 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_STATISTICS_V1,
6543 },
6544 { /* MCP55 Ethernet Controller */
6545 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_14),
6546 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT,
6547 },
6548 { /* MCP55 Ethernet Controller */
6549 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_15),
6550 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_VLAN|DEV_HAS_MSI|DEV_HAS_MSI_X|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT,
6551 },
6552 { /* MCP61 Ethernet Controller */
6553 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_16),
6554 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6555 },
6556 { /* MCP61 Ethernet Controller */
6557 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_17),
6558 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6559 },
6560 { /* MCP61 Ethernet Controller */
6561 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_18),
6562 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6563 },
6564 { /* MCP61 Ethernet Controller */
6565 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_19),
6566 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6567 },
6568 { /* MCP65 Ethernet Controller */
6569 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_20),
6570 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6571 },
6572 { /* MCP65 Ethernet Controller */
6573 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_21),
6574 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6575 },
6576 { /* MCP65 Ethernet Controller */
6577 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_22),
6578 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6579 },
6580 { /* MCP65 Ethernet Controller */
6581 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_23),
6582 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_LARGEDESC|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6583 },
6584 { /* MCP67 Ethernet Controller */
6585 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_24),
6586 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6587 },
6588 { /* MCP67 Ethernet Controller */
6589 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_25),
6590 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6591 },
6592 { /* MCP67 Ethernet Controller */
6593 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_26),
6594 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6595 },
6596 { /* MCP67 Ethernet Controller */
6597 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_27),
6598 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6599 },
6600 { /* MCP73 Ethernet Controller */
6601 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_28),
6602 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6603 },
6604 { /* MCP73 Ethernet Controller */
6605 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_29),
6606 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6607 },
6608 { /* MCP73 Ethernet Controller */
6609 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_30),
6610 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6611 },
6612 { /* MCP73 Ethernet Controller */
6613 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_31),
6614 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_HIGH_DMA|DEV_HAS_POWER_CNTRL|DEV_HAS_MSI|DEV_HAS_PAUSEFRAME_TX_V1|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6615 },
6616 { /* MCP77 Ethernet Controller */
6617 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_32),
6618 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6619 },
6620 { /* MCP77 Ethernet Controller */
6621 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_33),
6622 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6623 },
6624 { /* MCP77 Ethernet Controller */
6625 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_34),
6626 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6627 },
6628 { /* MCP77 Ethernet Controller */
6629 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_35),
6630 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V2|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6631 },
6632 { /* MCP79 Ethernet Controller */
6633 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_36),
6634 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6635 },
6636 { /* MCP79 Ethernet Controller */
6637 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_37),
6638 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6639 },
6640 { /* MCP79 Ethernet Controller */
6641 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_38),
6642 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6643 },
6644 { /* MCP79 Ethernet Controller */
6645 PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NVENET_39),
6646 .driver_data = DEV_NEED_TIMERIRQ|DEV_NEED_LINKTIMER|DEV_HAS_CHECKSUM|DEV_HAS_HIGH_DMA|DEV_HAS_MSI|DEV_HAS_POWER_CNTRL|DEV_HAS_PAUSEFRAME_TX_V3|DEV_HAS_STATISTICS_V2|DEV_HAS_TEST_EXTENDED|DEV_HAS_MGMT_UNIT|DEV_HAS_CORRECT_MACADDR,
6647 },
6648 {0,},
6649 };
6651 #ifdef CONFIG_PM
6652 static void nv_set_low_speed(struct net_device *dev)
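/*
 * Before suspending, drop a gigabit link to the lowest speed both link
 * partners advertise (10 or 100 Mbit) and restart autonegotiation,
 * presumably to reduce power draw while the system sleeps.  Only acts when
 * autoneg is on and the current link is 1000 Mbit.
 */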
6654 struct fe_priv *np = get_nvpriv(dev);
6655 int adv = 0;
6656 int lpa = 0;
6657 int adv_lpa, bmcr, tries = 0;
6658 int mii_status;
6659 u32 control_1000;
6661 if (np->autoneg == 0 || ((np->linkspeed & 0xFFF) != NVREG_LINKSPEED_1000))
6662 return;
6664 adv = mii_rw(dev, np->phyaddr, MII_ADVERTISE, MII_READ);
6665 lpa = mii_rw(dev, np->phyaddr, MII_LPA, MII_READ);
6666 control_1000 = mii_rw(dev, np->phyaddr, MII_CTRL1000, MII_READ);
6668 adv_lpa = lpa & adv;
6670 if ((adv_lpa & LPA_10FULL) || (adv_lpa & LPA_10HALF)) {
6671 adv &= ~(ADVERTISE_100BASE4 | ADVERTISE_100FULL | ADVERTISE_100HALF);
6672 control_1000 &= ~(ADVERTISE_1000FULL|ADVERTISE_1000HALF);
6673 printk(KERN_INFO "forcedeth %s: set low speed to 10mbps\n", dev->name);
6674 } else if ((adv_lpa & LPA_100FULL) || (adv_lpa & LPA_100HALF)) {
6675 control_1000 &= ~(ADVERTISE_1000FULL|ADVERTISE_1000HALF);
6676 } else
6677 return;
6679 /* set new advertisements */
6680 mii_rw(dev, np->phyaddr, MII_ADVERTISE, adv);
6681 mii_rw(dev, np->phyaddr, MII_CTRL1000, control_1000);
6683 bmcr = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ);
6684 if (np->phy_model == PHY_MODEL_MARVELL_E3016) {
6685 bmcr |= BMCR_ANENABLE;
6686 /* reset the phy in order for settings to stick,
6687 * and cause autoneg to start */
6688 if (phy_reset(dev, bmcr)) {
6689 printk(KERN_INFO "%s: phy reset failed\n", dev->name);
6690 return;
6692 } else {
6693 bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
6694 mii_rw(dev, np->phyaddr, MII_BMCR, bmcr);
6696 mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
6697 mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
6698 while (!(mii_status & BMSR_ANEGCOMPLETE)) {
6699 nv_msleep(100);
6700 mii_status = mii_rw(dev, np->phyaddr, MII_BMSR, MII_READ);
6701 if (tries++ > 50)
6702 break;
6705 nv_update_linkspeed(dev);
6707 return;
6710 static int nv_suspend(struct pci_dev *pdev, pm_message_t state)
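/*
 * Suspend path: quiesce and close the interface, release the management
 * semaphore, optionally renegotiate the link down to a lower speed, then
 * save PCI state and config space, arm wake-up and enter the requested
 * power state.
 */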
6712 struct net_device *dev = pci_get_drvdata(pdev);
6713 struct fe_priv *np = get_nvpriv(dev);
6714 u8 __iomem *base = get_hwbase(dev);
6715 int i;
6716 u32 tx_ctrl;
6718 dprintk(KERN_INFO "forcedeth: nv_suspend\n");
6720 /* MCP55:save msix table */
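/*
 * Snapshot each MSI-X vector's address/data words from the table in the
 * device BAR into np->nvmsg[] so that nv_resume() can write them back; the
 * vendor code does this by hand, presumably because the table contents are
 * not preserved across the power transition on these parts.
 */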
6721 if((pdev->device==PCI_DEVICE_ID_NVIDIA_NVENET_14)||(pdev->device==PCI_DEVICE_ID_NVIDIA_NVENET_15))
6723 unsigned long phys_addr;
6724 void __iomem *base_addr;
6725 void __iomem *base;
6726 unsigned int bir,len;
6727 unsigned int i;
6728 int pos;
6729 u32 table_offset;
6731 pos = pci_find_capability(pdev, PCI_CAP_ID_MSIX);
6732 pci_read_config_dword(pdev, pos+0x04 , &table_offset);
6733 bir = (u8)(table_offset & PCI_MSIX_FLAGS_BIRMASK);
6734 table_offset &= ~PCI_MSIX_FLAGS_BIRMASK;
6735 phys_addr = pci_resource_start(pdev, bir) + table_offset;
6736 np->msix_pa_addr = phys_addr;
6737 len = NV_MSI_X_MAX_VECTORS * PCI_MSIX_ENTRY_SIZE;
6738 base_addr = ioremap_nocache(phys_addr, len);
6740 for(i=0;i<NV_MSI_X_MAX_VECTORS;i++){
6741 base = base_addr + i*PCI_MSIX_ENTRY_SIZE;
6742 np->nvmsg[i].address_lo = readl(base + PCI_MSIX_ENTRY_LOWER_ADDR_OFFSET);
6743 np->nvmsg[i].address_hi = readl(base + PCI_MSIX_ENTRY_UPPER_ADDR_OFFSET );
6744 np->nvmsg[i].data = readl(base + PCI_MSIX_ENTRY_DATA_OFFSET);
6747 iounmap(base_addr);
6750 nv_update_linkspeed(dev);
6752 if (netif_running(dev)) {
6753 netif_device_detach(dev);
6754 /* bring down the adapter */
6755 nv_close(dev);
6758 /* relinquish control of the semaphore */
6759 if (np->mac_in_use){
6760 tx_ctrl = readl(base + NvRegTransmitterControl);
6761 tx_ctrl &= ~NVREG_XMITCTL_HOST_SEMA_MASK;
6762 writel(tx_ctrl, base + NvRegTransmitterControl);
6765 /* set phy to a lower speed to conserve power */
6766 if((lowpowerspeed==NV_LOW_POWER_ENABLED)&&!np->mac_in_use)
6767 nv_set_low_speed(dev);
6769 #if NVVER > RHES4
6770 pci_save_state(pdev);
6771 #else
6772 pci_save_state(pdev,np->pci_state);
6773 #endif
6774 np->saved_nvregphyinterface= readl(base+NvRegPhyInterface);
6775 for(i=0;i<64;i++){
6776 pci_read_config_dword(pdev,i*4,&np->saved_config_space[i]);
6778 #if NVVER > RHES4
6779 pci_enable_wake(pdev, pci_choose_state(pdev, state), np->wolenabled);
6780 #else
6781 pci_enable_wake(pdev, state, np->wolenabled);
6782 #endif
6783 pci_disable_device(pdev);
6785 #if NVVER > RHES4
6786 pci_set_power_state(pdev, pci_choose_state(pdev, state));
6787 #else
6788 pci_set_power_state(pdev, state);
6789 #endif
6791 return 0;
6794 static int nv_resume(struct pci_dev *pdev)
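/*
 * Undo nv_suspend(): restore PCI config space and the saved phy interface
 * and MAC address registers, put back the MSI-X table on MCP55, re-acquire
 * the management semaphore if the mgmt unit owns the MAC, re-init the phy
 * if it was dropped to a low speed, and reopen the device if it was
 * running.
 */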
6796 struct net_device *dev = pci_get_drvdata(pdev);
6797 int rc = 0;
6798 struct fe_priv *np = get_nvpriv(dev);
6799 u8 __iomem *base = get_hwbase(dev);
6800 int i;
6801 int err;
6802 u32 txreg;
6804 dprintk(KERN_INFO "forcedeth: nv_resume\n");
6806 pci_set_power_state(pdev, PCI_D0);
6807 #if NVVER > RHES4
6808 pci_restore_state(pdev);
6809 #else
6810 pci_restore_state(pdev,np->pci_state);
6811 #endif
6812 for(i=0;i<64;i++){
6813 pci_write_config_dword(pdev,i*4,np->saved_config_space[i]);
6815 err = pci_enable_device(pdev);
6816 if (err) {
6817 printk(KERN_INFO "forcedeth: pci_enable_device failed (%d) for device %s\n",
6818 err, pci_name(pdev));
6820 pci_set_master(pdev);
6822 txreg = readl(base + NvRegTransmitPoll);
6823 txreg |= NVREG_TRANSMITPOLL_MAC_ADDR_REV;
6824 writel(txreg, base + NvRegTransmitPoll);
6825 writel(np->saved_nvregphyinterface,base+NvRegPhyInterface);
6826 writel(np->orig_mac[0], base + NvRegMacAddrA);
6827 writel(np->orig_mac[1], base + NvRegMacAddrB);
6829 /* MCP55:restore msix table */
6830 if((pdev->device==PCI_DEVICE_ID_NVIDIA_NVENET_14)||(pdev->device==PCI_DEVICE_ID_NVIDIA_NVENET_15))
6832 unsigned long phys_addr;
6833 void __iomem *base_addr;
6834 void __iomem *base;
6835 unsigned int len;
6836 unsigned int i;
6838 len = NV_MSI_X_MAX_VECTORS * PCI_MSIX_ENTRY_SIZE;
6839 phys_addr = np->msix_pa_addr;
6840 base_addr = ioremap_nocache(phys_addr, len);
6841 for(i=0;i< NV_MSI_X_MAX_VECTORS;i++){
6842 base = base_addr + i*PCI_MSIX_ENTRY_SIZE;
6843 writel(np->nvmsg[i].address_lo,base + PCI_MSIX_ENTRY_LOWER_ADDR_OFFSET);
6844 writel(np->nvmsg[i].address_hi,base + PCI_MSIX_ENTRY_UPPER_ADDR_OFFSET);
6845 writel(np->nvmsg[i].data,base + PCI_MSIX_ENTRY_DATA_OFFSET);
6848 iounmap(base_addr);
6851 if(np->mac_in_use){
6852 /* take control of the semaphore */
6853 for (i = 0; i < 5000; i++) {
6854 if(nv_mgmt_acquire_sema(dev))
6855 break;
6856 nv_msleep(1);
6860 if(lowpowerspeed==NV_LOW_POWER_ENABLED){
6861 /* re-initialize the phy */
6862 phy_init(dev);
6863 udelay(10);
6866 /* bring up the adapter */
6867 if (netif_running(dev)){
6868 rc = nv_open(dev);
6870 netif_device_attach(dev);
6872 return rc;
6875 #endif /* CONFIG_PM */
6876 static struct pci_driver nv_eth_driver = {
6877 .name = "forcedeth",
6878 .id_table = pci_tbl,
6879 .probe = nv_probe,
6880 .remove = __devexit_p(nv_remove),
6881 #ifdef CONFIG_PM
6882 .suspend = nv_suspend,
6883 .resume = nv_resume,
6884 #endif
6885 };
6887 #ifdef CONFIG_PM
6888 static int nv_reboot_handler(struct notifier_block *nb, unsigned long event, void *p)
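/*
 * On halt, power-off or reboot, suspend every NVIDIA NIC bound to this
 * driver so the hardware is quiesced (and, presumably, Wake-on-LAN is
 * armed) before the system goes down.
 */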
6890 struct pci_dev *pdev = NULL;
6891 pm_message_t state = { PM_EVENT_SUSPEND };
6893 switch (event)
6895 case SYS_POWER_OFF:
6896 case SYS_HALT:
6897 case SYS_DOWN:
6898 #if NVVER < FEDORA7
6899 while ((pdev = pci_find_device(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, pdev)) != NULL) {
6900 #else
6901 while ((pdev = pci_get_device(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, pdev)) != NULL) {
6902 #endif
6903 if (pci_dev_driver(pdev) == &nv_eth_driver) {
6904 nv_suspend(pdev, state);
6909 return NOTIFY_DONE;
6912 /*
6913 * Reboot notification
6914 */
6915 struct notifier_block nv_reboot_notifier =
6917 notifier_call : nv_reboot_handler,
6918 next : NULL,
6919 priority : 0
6920 };
6921 #endif
6923 static int __init init_nic(void)
6925 int status;
6926 printk(KERN_INFO "forcedeth.c: Reverse Engineered nForce ethernet driver. Version %s.\n", FORCEDETH_VERSION);
6927 DPRINTK(DRV,KERN_DEBUG,"forcedeth:%s\n",DRV_DATE);
6928 #if NVVER > FEDORA7
6929 status = pci_register_driver(&nv_eth_driver);
6930 #else
6931 status = pci_module_init(&nv_eth_driver);
6932 #endif
6933 #ifdef CONFIG_PM
6934 if (status >= 0)
6935 register_reboot_notifier(&nv_reboot_notifier);
6936 #endif
6937 return status;
6940 static void __exit exit_nic(void)
6942 #ifdef CONFIG_PM
6943 unregister_reboot_notifier(&nv_reboot_notifier);
6944 #endif
6945 pci_unregister_driver(&nv_eth_driver);
6948 #if NVVER > SLES9
6949 module_param(debug, int, 0);
6950 module_param(lowpowerspeed, int, 0);
6951 MODULE_PARM_DESC(lowpowerspeed, "Low Power State Link Speed is enabled by setting to 1 and disabled by setting to 0.");
6952 module_param(max_interrupt_work, int, 0);
6953 MODULE_PARM_DESC(max_interrupt_work, "forcedeth maximum events handled per interrupt");
6954 module_param(optimization_mode, int, 0);
6955 MODULE_PARM_DESC(optimization_mode, "In throughput mode (0), every tx & rx packet will generate an interrupt. In CPU mode (1), interrupts are controlled by a timer.");
6956 module_param(poll_interval, int, 0);
6957 MODULE_PARM_DESC(poll_interval, "Interval determines how frequently the timer interrupt is generated by [(time_in_micro_secs * 100) / (2^10)]. Min is 0 and Max is 65535.");
6958 module_param(msi, int, 0);
6959 MODULE_PARM_DESC(msi, "MSI interrupts are enabled by setting to 1 and disabled by setting to 0.");
6960 module_param(msix, int, 0);
6961 MODULE_PARM_DESC(msix, "MSIX interrupts are enabled by setting to 1 and disabled by setting to 0.");
6963 module_param(speed_duplex, int, 0);
6964 MODULE_PARM_DESC(speed_duplex, "PHY speed and duplex settings. Auto = 0, 10mbps half = 1, 10mbps full = 2, 100mbps half = 3, 100mbps full = 4, 1000mbps full = 5.");
6965 module_param(autoneg, int, 0);
6966 MODULE_PARM_DESC(autoneg, "PHY autonegotiate is enabled by setting to 1 and disabled by setting to 0.");
6967 module_param(scatter_gather, int, 0);
6968 MODULE_PARM_DESC(scatter_gather, "Scatter gather is enabled by setting to 1 and disabled by setting to 0.");
6969 module_param(tso_offload, int, 0);
6970 MODULE_PARM_DESC(tso_offload, "TCP Segmentation offload is enabled by setting to 1 and disabled by setting to 0.");
6971 module_param(mtu, int, 0);
6972 MODULE_PARM_DESC(mtu, "MTU value. Maximum value of 1500 or 9100 depending on hardware.");
6973 module_param(tx_checksum_offload, int, 0);
6974 MODULE_PARM_DESC(tx_checksum_offload, "Tx checksum offload is enabled by setting to 1 and disabled by setting to 0.");
6975 module_param(rx_checksum_offload, int, 0);
6976 MODULE_PARM_DESC(rx_checksum_offload, "Rx checksum offload is enabled by setting to 1 and disabled by setting to 0.");
6977 module_param(tx_ring_size, int, 0);
6978 MODULE_PARM_DESC(tx_ring_size, "Tx ring size. Maximum value of 1024 or 16384 depending on hardware.");
6979 module_param(rx_ring_size, int, 0);
6980 MODULE_PARM_DESC(rx_ring_size, "Rx ring size. Maximum value of 1024 or 16384 depending on hardware.");
6981 module_param(tx_flow_control, int, 0);
6982 MODULE_PARM_DESC(tx_flow_control, "Tx flow control is enabled by setting to 1 and disabled by setting to 0.");
6983 module_param(rx_flow_control, int, 0);
6984 MODULE_PARM_DESC(rx_flow_control, "Rx flow control is enabled by setting to 1 and disabled by setting to 0.");
6985 module_param(dma_64bit, int, 0);
6986 MODULE_PARM_DESC(dma_64bit, "High DMA is enabled by setting to 1 and disabled by setting to 0.");
6987 module_param(wol, int, 0);
6988 MODULE_PARM_DESC(wol, "Wake-On-LAN is enabled by setting to 1 and disabled by setting to 0.");
6989 module_param(tagging_8021pq, int, 0);
6990 MODULE_PARM_DESC(tagging_8021pq, "802.1pq tagging is enabled by setting to 1 and disabled by setting to 0.");
6991 #else
6992 MODULE_PARM(debug, "i");
6993 MODULE_PARM(lowpowerspeed, "i");
6994 MODULE_PARM_DESC(lowpowerspeed, "Low Power State Link Speed is enabled by setting to 1 and disabled by setting to 0.");
6995 MODULE_PARM(max_interrupt_work, "i");
6996 MODULE_PARM_DESC(max_interrupt_work, "forcedeth maximum events handled per interrupt");
6997 MODULE_PARM(optimization_mode, "i");
6998 MODULE_PARM_DESC(optimization_mode, "In throughput mode (0), every tx & rx packet will generate an interrupt. In CPU mode (1), interrupts are controlled by a timer.");
6999 MODULE_PARM(poll_interval, "i");
7000 MODULE_PARM_DESC(poll_interval, "Interval determines how frequently the timer interrupt is generated, computed as [(time_in_micro_secs * 100) / (2^10)]. Min is 0 and Max is 65535.");
7001 #ifdef CONFIG_PCI_MSI
7002 MODULE_PARM(msi, "i");
7003 MODULE_PARM_DESC(msi, "MSI interrupts are enabled by setting to 1 and disabled by setting to 0.");
7004 MODULE_PARM(msix, "i");
7005 MODULE_PARM_DESC(msix, "MSIX interrupts are enabled by setting to 1 and disabled by setting to 0.");
7006 #endif
7007 MODULE_PARM(speed_duplex, "i");
7008 MODULE_PARM_DESC(speed_duplex, "PHY speed and duplex settings. Auto = 0, 10mbps half = 1, 10mbps full = 2, 100mbps half = 3, 100mbps full = 4, 1000mbps full = 5.");
7009 MODULE_PARM(autoneg, "i");
7010 MODULE_PARM_DESC(autoneg, "PHY autonegotiate is enabled by setting to 1 and disabled by setting to 0.");
7011 MODULE_PARM(scatter_gather, "i");
7012 MODULE_PARM_DESC(scatter_gather, "Scatter gather is enabled by setting to 1 and disabled by setting to 0.");
7013 MODULE_PARM(tso_offload, "i");
7014 MODULE_PARM_DESC(tso_offload, "TCP Segmentation offload is enabled by setting to 1 and disabled by setting to 0.");
7015 MODULE_PARM(mtu, "i");
7016 MODULE_PARM_DESC(mtu, "MTU value. Maximum value of 1500 or 9100 depending on hardware.");
7017 MODULE_PARM(tx_checksum_offload, "i");
7018 MODULE_PARM_DESC(tx_checksum_offload, "Tx checksum offload is enabled by setting to 1 and disabled by setting to 0.");
7019 MODULE_PARM(rx_checksum_offload, "i");
7020 MODULE_PARM_DESC(rx_checksum_offload, "Rx checksum offload is enabled by setting to 1 and disabled by setting to 0.");
7021 MODULE_PARM(tx_ring_size, "i");
7022 MODULE_PARM_DESC(tx_ring_size, "Tx ring size. Maximum value of 1024 or 16384 depending on hardware.");
7023 MODULE_PARM(rx_ring_size, "i");
7024 MODULE_PARM_DESC(rx_ring_size, "Rx ring size. Maximum value of 1024 or 16384 depending on hardware.");
7025 MODULE_PARM(tx_flow_control, "i");
7026 MODULE_PARM_DESC(tx_flow_control, "Tx flow control is enabled by setting to 1 and disabled by setting to 0.");
7027 MODULE_PARM(rx_flow_control, "i");
7028 MODULE_PARM_DESC(rx_flow_control, "Rx flow control is enabled by setting to 1 and disabled by setting to 0.");
7029 MODULE_PARM(dma_64bit, "i");
7030 MODULE_PARM_DESC(dma_64bit, "High DMA is enabled by setting to 1 and disabled by setting to 0.");
7031 MODULE_PARM(wol, "i");
7032 MODULE_PARM_DESC(wol, "Wake-On-LAN is enabled by setting to 1 and disabled by setting to 0.");
7033 MODULE_PARM(tagging_8021pq, "i");
7034 MODULE_PARM_DESC(tagging_8021pq, "802.1pq tagging is enabled by setting to 1 and disabled by setting to 0.");
7035 #endif
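/*
 * Illustrative sketch (not part of the original driver; the helper name is
 * hypothetical): converting a desired polling period in microseconds into
 * a poll_interval value using the formula quoted in the parameter
 * description above, (time_in_micro_secs * 100) / (2^10).  For example, a
 * ~1000us period maps to 1000 * 100 / 1024 = 97.
 */
static inline unsigned int nv_us_to_poll_interval(unsigned int micro_secs)
{
	unsigned int val = (micro_secs * 100) >> 10;	/* (us * 100) / 2^10 */

	return val > 65535 ? 65535 : val;		/* documented maximum */
}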
7036 MODULE_AUTHOR("Manfred Spraul <manfred@colorfullife.com>");
7037 MODULE_DESCRIPTION("Reverse Engineered nForce ethernet driver");
7038 MODULE_LICENSE("GPL");
7039 MODULE_VERSION(FORCEDETH_VERSION);
7041 MODULE_DEVICE_TABLE(pci, pci_tbl);
7043 module_init(init_nic);
7044 module_exit(exit_nic);
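/*
 * Usage example (editorial sketch, not part of the original source): the
 * module parameters declared above are supplied at load time, e.g.
 *
 *	modprobe forcedeth msi=1 optimization_mode=1 poll_interval=97
 *
 * or, when the driver is built into the kernel, on the kernel command line
 * in the form forcedeth.optimization_mode=1 and so on.
 */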