Univeral Protocol Driver (using UNDI) in Linux

Univeral Protocol Driver (using UNDI) in Linux

am 07.08.2006 12:39:02 von Daniel Rodrick

Hi list,

I was curious as to why a Universal driver (using UNDI API) for Linux
does not exist (or does it)?

I want to try and write a such a driver that could (in principle)
handle all the NICs that are PXE compatible.

Has this been tried? What are the technical problems that might come in my way?

Thanks,

Dan
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: Univeral Protocol Driver (using UNDI) in Linux

am 07.08.2006 17:09:17 von hpa

Daniel Rodrick wrote:
> Hi list,
>
> I was curious as to why a Universal driver (using UNDI API) for Linux
> does not exist (or does it)?
>
> I want to try and write a such a driver that could (in principle)
> handle all the NICs that are PXE compatible.
>
> Has this been tried? What are the technical problems that might come in
> my way?
>

It has been tried; in fact Intel did implement this in their "Linux PXE
SDK". The UNDI API is absolutely atrocious, however, being based on
NDIS2 which is widely considered the worst of all the many network
stacks for DOS.

Additionally, many UNDI stacks don't work correctly when called from
protected mode, since the interface doesn't work right. Additionally,
UNDI is *ONLY* available after booting from the NIC.

-hpa
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: Univeral Protocol Driver (using UNDI) in Linux

am 07.08.2006 18:11:23 von Daniel Rodrick

On 8/7/06, H. Peter Anvin wrote:
> Daniel Rodrick wrote:
> > Hi list,
> >
> > I was curious as to why a Universal driver (using UNDI API) for Linux
> > does not exist (or does it)?
> >
> > I want to try and write a such a driver that could (in principle)
> > handle all the NICs that are PXE compatible.
> >
> > Has this been tried? What are the technical problems that might come in
> > my way?
> >
>
> It has been tried; in fact Intel did implement this in their "Linux PXE
> SDK". The UNDI API is absolutely atrocious, however, being based on
> NDIS2 which is widely considered the worst of all the many network
> stacks for DOS.
>
> Additionally, many UNDI stacks don't work correctly when called from
> protected mode, since the interface doesn't work right. Additionally,
> UNDI is *ONLY* available after booting from the NIC.
>

Agreed. But still having a single driver for all the NICs would be
simply GREAT for my setup, in which all the PCs will be booted using
PXE only. So apart from performance / relilability issues, what are
the technical roadblocks in this?

I'm sure having a single driver for all the NICs is a feature cool
enough to die for. Yes, it might have drawbacks like just pointed out
by Peter, but surely a "single driver for all NIC" feature could prove
to be great in some systems.

But since it does not already exist in the kernel, there must be some
technical feasibility isse. Any ideas on this?

And any Idea where can I find the Intel's PXE SDK for Linux? I googled
for an hour but to no avail. Texts suggest that it was available on
Intel Architecture's Lab website which is non existent now. I would
appreciate if some one could send me the PXE SDk from Intel, if
somebody still has a copy.

Thanks,

Dan

Re: Univeral Protocol Driver (using UNDI) in Linux

am 07.08.2006 18:49:42 von hpa

Daniel Rodrick wrote:
>
> Agreed. But still having a single driver for all the NICs would be
> simply GREAT for my setup, in which all the PCs will be booted using
> PXE only. So apart from performance / relilability issues, what are
> the technical roadblocks in this?
>
> I'm sure having a single driver for all the NICs is a feature cool
> enough to die for. Yes, it might have drawbacks like just pointed out
> by Peter, but surely a "single driver for all NIC" feature could prove
> to be great in some systems.
>

Assuming it works, which is questionable in my opinion.

> But since it does not already exist in the kernel, there must be some
> technical feasibility isse. Any ideas on this?

No, that's not the reason. The Intel code was ugly, and the limitations
made other people not want to spend any time hacking on it.

-hpa

-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: Univeral Protocol Driver (using UNDI) in Linux

am 07.08.2006 20:00:49 von Jan Engelhardt

>
> Agreed. But still having a single driver for all the NICs would be
> simply GREAT for my setup, in which all the PCs will be booted using
> PXE only. So apart from performance / relilability issues, what are
> the technical roadblocks in this?

Netboot, in the current world, could be done like this:

1. Grab the PXE ROM code chip manufacturers offer in case your network card
does not support booting via PXE yet and write it to an EPROM which most
PCI network cards have a socket for

2. Use PXELINUX, boot that with the help of the PXE ROM code

3. Put all drivers needed into the kernel or initrd; or send out different
initrds depending on the DHCP info the PXE client sent.


Jan Engelhardt
--
-
To unsubscribe from this list: send the line "unsubscribe linux-newbie" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.linux-learn.org/faqs

Re: Univeral Protocol Driver (using UNDI) in Linux

am 07.08.2006 20:15:19 von hpa

Jan Engelhardt wrote:
>> Agreed. But still having a single driver for all the NICs would be
>> simply GREAT for my setup, in which all the PCs will be booted using
>> PXE only. So apart from performance / relilability issues, what are
>> the technical roadblocks in this?
>
> Netboot, in the current world, could be done like this:
>
> 1. Grab the PXE ROM code chip manufacturers offer in case your network card
> does not support booting via PXE yet and write it to an EPROM which most
> PCI network cards have a socket for
>
> 2. Use PXELINUX, boot that with the help of the PXE ROM code
>
> 3. Put all drivers needed into the kernel or initrd; or send out different
> initrds depending on the DHCP info the PXE client sent.
>

There is a program called "ethersel" included with PXELINUX which can be
used to send out different initrds depending on an enumeration of PCI space.

-hpa
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: Univeral Protocol Driver (using UNDI) in Linux

am 08.08.2006 07:13:13 von Daniel Rodrick

> >
> > I'm sure having a single driver for all the NICs is a feature cool
> > enough to die for. Yes, it might have drawbacks like just pointed out
> > by Peter, but surely a "single driver for all NIC" feature could prove
> > to be great in some systems.
> >
>
> Assuming it works, which is questionable in my opinion.
>
> > But since it does not already exist in the kernel, there must be some
> > technical feasibility isse. Any ideas on this?
>
> No, that's not the reason. The Intel code was ugly, and the limitations
> made other people not want to spend any time hacking on it.
>

Hi ... so there seem to be no technical feasibily issues, just
reliabliy / ugly design issues? So one can still go ahead and write a
Universal Protocol Driver that can work with all (PXE compatible)
NICs?

Are there any issues related to real mode / protected mode?

Thanks,

Dan
-
To unsubscribe from this list: send the line "unsubscribe linux-newbie" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.linux-learn.org/faqs

Re: Univeral Protocol Driver (using UNDI) in Linux

am 08.08.2006 17:04:47 von Alan Shieh

unfoDaniel Rodrick wrote:
>> >
>> > I'm sure having a single driver for all the NICs is a feature cool
>> > enough to die for. Yes, it might have drawbacks like just pointed out
>> > by Peter, but surely a "single driver for all NIC" feature could prove
>> > to be great in some systems.
>> >
>>
>> Assuming it works, which is questionable in my opinion.
>>
>> > But since it does not already exist in the kernel, there must be some
>> > technical feasibility isse. Any ideas on this?
>>
>> No, that's not the reason. The Intel code was ugly, and the limitations
>> made other people not want to spend any time hacking on it.
>>
>
> Hi ... so there seem to be no technical feasibily issues, just
> reliabliy / ugly design issues? So one can still go ahead and write a
> Universal Protocol Driver that can work with all (PXE compatible)
> NICs?

With help from the Etherboot Project, I've recently implemented such a
driver for Etherboot 5.4. It currently supports PIO NICs (e.g. cards
that use in*/out* to interface with CPU). It's currently available in a
branch, and will be merged into the trunk by the Etherboot project. It
works reliably with QEMU + PXELINUX, with the virtual ne2k-pci NIC.

Barring unforseen issues, I should get MMIO to work soon; target
platform would be pcnet32 or e1000 on VMware, booted with PXELINUX.

I doubt the viability of implementing an UNDI driver that works with all
PXE stacks in existence without dirty tricks, as PXE / UNDI lack some
important functionality; I'll summarize some of the issues below.

> Are there any issues related to real mode / protected mode?

Yes, lots.

At minimum, one needs to be able to probe for !PXE presence, which means
you need to map in 0-1MB of physical memory. The PXE stack's memory also
needs to be mapped in. My UNDI driver relies on a kernel module, generic
across all NICs, to accomplish these by mapping in the !PXE probe area
and PXE memory in a user process.

The PXE specification is very hazy about protected mode operation. In
principal, PXE stacks are supposed to support UNDI invocation from
either 16:16 or 16:32 protected mode. I doubt the average stack was
ever extensively tested from 32-bit code, as the vast majority of UNDI
clients execute in real mode or V8086 mode.

I encountered several some protected mode issues with Etherboot, some of
which most likely come up in manufacturer PXE stacks due to similar
historical influences. I happen to prefer Etherboot stacks over
manufacturer stacks -- as its OSS, you can fix the bugs, and change the
UI / timeouts, etc. to work better for your environment:

* Some code did not execute cleanly in CPL3 (e.g., the console print
code issues privileged instructions)
* Far pointers interpreted as Real mode, rather than 16:16 protected
mode. This doesn't matter for real mode execution. Many occurances of
far pointers in the PXE spec are a bad idea, see below.
* Etherboot keeps itself resident in memory after kernel loads, and
munges the E820 map to discourage the kernel from messing with it. Other
stacks may not be so smart.

I was able to fix these minor problems with Etherboot with relative ease
because I had the source. I could also extend the PXE interface and
conveniently ignore idiosyncracies in the PXE spec.

This is harder with a closed-source stack. In my opinion, the best way
to reliably run a closed-source, manufacturer PXE stack would be to suck
it into a virtualization process, like QEMU, virtualize away all of the
ugly CPL0 and real-mode liberties taken by the code, and talk to it with
a small real-mode stub within the VM. The virtualization would need to
shunt PIO and MMIO to the real hardware, and maybe re-export PCI
configuration space.

( An even easier way would be to axe the manufacturer's stack and use
Etherboot instead )

PXE is also missing some functionality. Some of this can be worked
around with ugly hacks, others are a deal-breaker.

* PXE reports its memory segments using a maximum of 16 bits for segment
length. The segment length is necessary for loading PXE stack into
virtual address space.

It is possible to work around this with an E820 trick -- find the
segment base, which is 32 bit, and then find a E820 hole that contains
it. Unfortunately, e820map is not exported to Linux modules by the kernel.

16-bits is a really tiny amount of memory, especially if one wants to
use the same segments for code & data. Etherboot easily overflows this
when uncompressed.

* UNDI transmit descriptors point to packet chunks 16:16 pointers. This
is fine if the PXE stack will copy data from those chunks into a fixed
packet buffer. There are several ways for things to go horribly wrong if
the stack attempts to DMA (I believe it is actually impossible to do
this correctly from CPL3 if arbitrary segmentation & paging are in
effect). There is a provision in my copy of the UNDI spec for physical
addresses, which is a lot more robust in a protected mode world, but the
spec says "not supported in this version of PXE", which means a lot of
stacks you encounter in the wild will not support this.

This can be worked around by making conservative assumptions, e.g.
ensuring that virtual address == physical address for all regions you
pass down to the card, and the selector you use happens to emulate the
semantics of a real-mode pointer.

* PXE will need access to the PCI memory mapped region for MMIO NICs.
(With PIO, you just need to set IOPL(3) to give your driver process free
reign over I/O port space). The physical addresses of the PCI devices
are usually really high up in memory. The Linux kernel takes over this
address space, as it owns everything from 0xc0000000 and up, and most,
if not all, of this address range needs to be invariant between
different process address spaces.

With the help of the kernel module, the user process can just remap
those physical addresses to a different area. Unfortunately, PXE doesn't
have an interface for reprogramming it with the new MMIO location.
--
Alan Shieh
PhD Student, Computer Science
331 Upson Hall
Ithaca, NY 14853

Re: Univeral Protocol Driver (using UNDI) in Linux

am 08.08.2006 18:35:29 von hpa

Daniel Rodrick wrote:
>
> Hi ... so there seem to be no technical feasibily issues, just
> reliabliy / ugly design issues? So one can still go ahead and write a
> Universal Protocol Driver that can work with all (PXE compatible)
> NICs?
>

For some definition of "all" = meaning "it might work on some of them."

> Are there any issues related to real mode / protected mode?

Yes, a whole bunch of PXE/UNDI stacks are broken in protected mode.

-hpa
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: Univeral Protocol Driver (using UNDI) in Linux

am 10.08.2006 10:18:14 von Daniel Rodrick

Hello Alan,

First things first. Tons of thanks for a mail FULLY LOADED with useful
information. I'm still to digest it all, but really thanks.


> >> >
> >> > I'm sure having a single driver for all the NICs is a feature cool
> >> > enough to die for. Yes, it might have drawbacks like just pointed out
> >> > by Peter, but surely a "single driver for all NIC" feature could prove
> >> > to be great in some systems.
> >> >
[Snip]
> >
> > Hi ... so there seem to be no technical feasibily issues, just
> > reliabliy / ugly design issues? So one can still go ahead and write a
> > Universal Protocol Driver that can work with all (PXE compatible)
> > NICs?
>
> With help from the Etherboot Project, I've recently implemented such a
> driver for Etherboot 5.4. It currently supports PIO NICs (e.g. cards
> that use in*/out* to interface with CPU). It's currently available in a
> branch, and will be merged into the trunk by the Etherboot project. It
> works reliably with QEMU + PXELINUX, with the virtual ne2k-pci NIC.

Umm ... pardon me if I am wrong, but I think you implemented a "UNDI
Driver" (i.e. the code that provides implementation of UNDI API, and
often resides in the NIC ROM) . I'm looking forward to write a
"Universal Protocol Driver" (i.e. the code that will be a linux kernel
module and will, use the UNDI API provided by your UNDI driver).

But nevertheless your information has been *VERY* helpful ...

> At minimum, one needs to be able to probe for !PXE presence, which means
> you need to map in 0-1MB of physical memory. The PXE stack's memory also
> needs to be mapped in. My UNDI driver relies on a kernel module, generic
> across all NICs, to accomplish these by mapping in the !PXE probe area
> and PXE memory in a user process.

I'm pretty newbie to PXE, but I I think !PXE structure is used to find
out the location & size of PXE & UNDI runtime routines, by UNIVERSAL
PROTOCOL DRIVERS. Is my understanding wrong?

Also, I think that UNDI driver routine will need not call PXE routines
(TFTP / DHCP etc) as UNDI routines would be at a lower level providing
access to the bare bones hardware. Is this correct?

Thanks,

Dan
-
To unsubscribe from this list: send the line "unsubscribe linux-newbie" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.linux-learn.org/faqs

Re: Univeral Protocol Driver (using UNDI) in Linux

am 10.08.2006 22:59:06 von Donald Becker

On Tue, 8 Aug 2006, Alan Shieh wrote:

> With help from the Etherboot Project, I've recently implemented such a
> driver for Etherboot 5.4. It currently supports PIO NICs (e.g. cards
> that use in*/out* to interface with CPU). It's currently available in a
> branch, and will be merged into the trunk by the Etherboot project. It
> works reliably with QEMU + PXELINUX, with the virtual ne2k-pci NIC.
>
> Barring unforseen issues, I should get MMIO to work soon; target
> platform would be pcnet32 or e1000 on VMware, booted with PXELINUX.

Addressing a very narrow terminology issue:

I think that there a misuse of terms here. PIO, or Programmed I/O, is
data transfer mode where the CPU explicitly passes data to and from a
device through a single register location.

The term is unrelated to the device being addressed in I/O or memory
space. On the x86, hardware devices were usually in the I/O address space
and there are special "string" I/O instructions, so PIO and I/O space were
usually paired. But processors without I/O instructions could still do
PIO transfers.

Memory Mapped I/O, MMIO, is putting the device registers in memory space.
In the past it specifically meant sharing memory on the device e.g. when
the packet buffer memory on a WD8013 NIC was jumpered into the ISA address
space below 1MB. The CPU still controls the data transfer, generating a
write for each word transfered. But the address changed with each write,
just like a memcpy.

Today "MMIO" is sometimes misused to mean simply using a PCI memory
mapping rather than a PCI I/O space mapping. PCI actually has three
address spaces: memory, I/O and config spaces. Many devices allow
accessing the operational registers either through I/O or memory spaces.

Both PIO and MMIO are obsolescent. You'll still find it in legacy
interfaces, such as IDE disk controllers where it is only used during boot
or when running DOS. Instead bulk data is moved using bus master
transfers, where the device itself generates read or write
bus transactions.

Specifically, you won't find PIO or MMIO on any current NIC. They only
existed on a few early PCI NICs, circa 1995, where an ISA design was
reworked.

So what does all of this have to with UNDI (and, not coincidentally,
virtualization)? There are a bunch of problems with using UNDI drivers,
but we only need one unsolvable one to make it a doomed effort. It's a
big challenge to limit, control and remap how the UNDI driver code talks
to the hardware. That seems to be focus above -- just dealing with how to
control access to the device registers. But even if we do that correctly
, the driver is still setting up the NIC to be a bus master. The device
hardware will be reading and writing directly to memory, and the driver
has to make a bunch of assumptions when calculating those addresses.

> ( An even easier way would be to axe the manufacturer's stack and use
> Etherboot instead )

People are hoping to magically get driver support for all hardware
without writing drivers. If it were as easy as writing a Linux-to-UNDI
shim, that shim would have been written long ago. UNDI doesn't come
close to being an OS-independent driver interface, even if you are willing
to accept the big performance hit.


--
Donald Becker becker@scyld.com
Scyld Software Scyld Beowulf cluster systems
914 Bay Ridge Road, Suite 220 www.scyld.com
Annapolis MD 21403 410-990-9993

-
To unsubscribe from this list: send the line "unsubscribe linux-newbie" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.linux-learn.org/faqs

Re: Univeral Protocol Driver (using UNDI) in Linux

am 10.08.2006 23:08:13 von hpa

Donald Becker wrote:
> On Tue, 8 Aug 2006, Alan Shieh wrote:
>
>> With help from the Etherboot Project, I've recently implemented such a
>> driver for Etherboot 5.4. It currently supports PIO NICs (e.g. cards
>> that use in*/out* to interface with CPU). It's currently available in a
>> branch, and will be merged into the trunk by the Etherboot project. It
>> works reliably with QEMU + PXELINUX, with the virtual ne2k-pci NIC.
>>
>> Barring unforseen issues, I should get MMIO to work soon; target
>> platform would be pcnet32 or e1000 on VMware, booted with PXELINUX.
>
> Addressing a very narrow terminology issue:
>
> I think that there a misuse of terms here. PIO, or Programmed I/O, is
> data transfer mode where the CPU explicitly passes data to and from a
> device through a single register location.
>
> The term is unrelated to the device being addressed in I/O or memory
> space. On the x86, hardware devices were usually in the I/O address space
> and there are special "string" I/O instructions, so PIO and I/O space were
> usually paired. But processors without I/O instructions could still do
> PIO transfers.
>
> Memory Mapped I/O, MMIO, is putting the device registers in memory space.
> In the past it specifically meant sharing memory on the device e.g. when
> the packet buffer memory on a WD8013 NIC was jumpered into the ISA address
> space below 1MB. The CPU still controls the data transfer, generating a
> write for each word transfered. But the address changed with each write,
> just like a memcpy.
>

Basically:

PIO = CPU initiates transfers
DMA = device initiates transfers(*)
IOIO = I/O mapped PIO
MMIO = Memory mapped PIO


> Today "MMIO" is sometimes misused to mean simply using a PCI memory
> mapping rather than a PCI I/O space mapping. PCI actually has three
> address spaces: memory, I/O and config spaces. Many devices allow
> accessing the operational registers either through I/O or memory spaces.

Why is that a misuse?

-hpa


(*) There are actually two kinds of DMA, bus mastering DMA, in which the
device itself initiates transfers, and slave DMA, in which a third agent
(the DMA controller) initiates transfers. In modern PC systems slave
DMA is extremely slow and is only used for legacy devices, pretty much
the floppy disk and nothing else, but some other architectures have
modern implementations and slave DMA is widely used.
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: Univeral Protocol Driver (using UNDI) in Linux

am 11.08.2006 22:01:01 von Alan Shieh

> Umm ... pardon me if I am wrong, but I think you implemented a "UNDI
> Driver" (i.e. the code that provides implementation of UNDI API, and
> often resides in the NIC ROM) . I'm looking forward to write a
> "Universal Protocol Driver" (i.e. the code that will be a linux kernel
> module and will, use the UNDI API provided by your UNDI driver).

I wrote a universal protocol driver that runs in Linux, and talks to an
extended UNDI stack implemented in Etherboot.

>> At minimum, one needs to be able to probe for !PXE presence, which means
>> you need to map in 0-1MB of physical memory. The PXE stack's memory also
>> needs to be mapped in. My UNDI driver relies on a kernel module, generic
>> across all NICs, to accomplish these by mapping in the !PXE probe area
>> and PXE memory in a user process.
>
>
> I'm pretty newbie to PXE, but I I think !PXE structure is used to find
> out the location & size of PXE & UNDI runtime routines, by UNIVERSAL
> PROTOCOL DRIVERS. Is my understanding wrong?

That's right.

> Also, I think that UNDI driver routine will need not call PXE routines
> (TFTP / DHCP etc) as UNDI routines would be at a lower level providing
> access to the bare bones hardware. Is this correct?

I'm calling the UNDI level routines (packet send, interrupt handling)
from my driver.

Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: Univeral Protocol Driver (using UNDI) in Linux

am 11.08.2006 22:48:23 von Alan Shieh

Donald Becker wrote:
> On Tue, 8 Aug 2006, Alan Shieh wrote:
>
>
>>With help from the Etherboot Project, I've recently implemented such a
>>driver for Etherboot 5.4. It currently supports PIO NICs (e.g. cards
>>that use in*/out* to interface with CPU). It's currently available in a
>>branch, and will be merged into the trunk by the Etherboot project. It
>>works reliably with QEMU + PXELINUX, with the virtual ne2k-pci NIC.
>>
>>Barring unforseen issues, I should get MMIO to work soon; target
>>platform would be pcnet32 or e1000 on VMware, booted with PXELINUX.
>
>
> Addressing a very narrow terminology issue:



Thanks for the terminology clarification. I'll try to clarify the
hardware scenarios.

Currently, cards that use the I/O port space ought to work, so long as
their Etherboot drivers are well-behaved. Those that use PIO (like the
QEMU's NE2K-PCI, which looks like NE2K bolted onto a PCI/ISA bridge)
ought to work 100%. Those that use I/O ports to set up DMA transfers may
or may not work depending on whether they try to DMA directly into
pointers specified by UNDI calls.

I am in the process of implementing a way to pass the virtual address of
each memory mapped I/O region described in the PCI BARs; once this
works, the same caveats as for I/O port controlled

> So what does all of this have to with UNDI (and, not coincidentally,
> virtualization)? There are a bunch of problems with using UNDI drivers,
> but we only need one unsolvable one to make it a doomed effort. It's a
> big challenge to limit, control and remap how the UNDI driver code talks
> to the hardware. That seems to be focus above -- just dealing with how to
> control access to the device registers. But even if we do that correctly
> , the driver is still setting up the NIC to be a bus master. The device
> hardware will be reading and writing directly to memory, and the driver
> has to make a bunch of assumptions when calculating those addresses.

This is true. I had to verify that the Etherboot UNDI code performs
copies to/from its memory (which it knows the precise location of),
rather than DMAing to pointers.

I think the virtualization approach is workable for stacks that hide
themselves properly in E820 maps (not specified in the PXE spec), but
somewhat complicated. It would have to either use a IOMMU (afaik not
present on Pacifica, VT, or LT), or enforce a 1:1 correspondence between
the emulated physical pages and actual physical pages that are provided
to the stack.

The advantage of this over downloading an Etherboot stack is that it
slightly simplifies deployment -- there is no need to install a
Etherboot option ROM or boot image on each node. I believe the code
complexity, development, and testing costs are higher than they are for
our current approach, where we get to control and fix the firmware.

> People are hoping to magically get driver support for all hardware
> without writing drivers. If it were as easy as writing a Linux-to-UNDI
> shim, that shim would have been written long ago. UNDI doesn't come
> close to being an OS-independent driver interface, even if you are willing
> to accept the big performance hit.

I agree. The PXE specification is woefully underspecified and clunky for
protected mode operation. Performance is on the order of broadband speed
-- I'm getting 200-250KB/s using polling (as opposed to interrupt-driven
I/O), with QEMU running on a 2GHz Athlon.

We still need drivers in the Etherboot-based strategy -- they live in
the Etherboot layer. Etherboot drivers would be checked for conformance
with protected mode operation.

Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-newbie" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.linux-learn.org/faqs

RE: Univeral Protocol Driver (using UNDI) in Linux

am 28.09.2006 11:59:10 von Deepak Gupta

Hi Alan & others,

I am a newbie in field of PXE. I went through the entire thread. An
interesting discussion.

Well I got a chance to hands on Universal Protocol Driver itself. I am
trying to implement the same. The objective of this driver is same as
suggested by Daniel.

Still I am mentioning what we are trying here: -

1. Write a minimal Network driver, which will run on kernel and use only
PXE API for sending/receiving the packets over the network.

2. The driver will provide only basic functionalities like just send and
receive the packets. NO performance talks please.

3. Please note that this driver is not intended to use in network
booting task.

4. So logically this driver will work on all the NICs which compliance
with PXE specifications. Our present target is PXE 2.1 specification.

With the above mentioned information, at present we are able to achieve
following: -

1. Identification of !PXE structure and API calling is OK.

2. Sending packet also OK.

3. We are facing issues in receive packet part. Specially interrupt is
not received on irq line (returned by PXE API
PXENV_UNDI_GET_INFORMATION).

With all this information, please find my replies inline: -

> With help from the Etherboot Project, I've recently implemented such a

> driver for Etherboot 5.4. It currently supports PIO NICs (e.g. cards
> that use in*/out* to interface with CPU). It's currently available in
a
> branch, and will be merged into the trunk by the Etherboot project. It

> works reliably with QEMU + PXELINUX, with the virtual ne2k-pci NIC.

> Barring unforseen issues, I should get MMIO to work soon; target
> platform would be pcnet32 or e1000 on VMware, booted with PXELINUX.

> I doubt the viability of implementing an UNDI driver that works with
all
> PXE stacks in existence without dirty tricks, as PXE / UNDI lack some
> important functionality; I'll summarize some of the issues below.

I am sorry but I am not clear regarding this, do you want to say PXE
specification is lacking some functionality ?

> > Are there any issues related to real mode / protected mode?

> Yes, lots.

> At minimum, one needs to be able to probe for !PXE presence, which
means
> you need to map in 0-1MB of physical memory. The PXE stack's memory
also
> needs to be mapped in. My UNDI driver relies on a kernel module,
generic
> across all NICs, to accomplish these by mapping in the !PXE probe area

> and PXE memory in a user process.

> The PXE specification is very hazy about protected mode operation. In
> principal, PXE stacks are supposed to support UNDI invocation from
> either 16:16 or 16:32 protected mode. I doubt the average stack was
> ever extensively tested from 32-bit code, as the vast majority of UNDI

> clients execute in real mode or V8086 mode.

I have used 16:16 only, and so far I think it is OK to use. Have you
found any problem in 16:16? If yes please let us know. I have used it in
protected mode only. And yes I have not tested on too many NICs. So my
knowledge here is very limited.

> I encountered several some protected mode issues with Etherboot, some
of
> which most likely come up in manufacturer PXE stacks due to similar
> historical influences. I happen to prefer Etherboot stacks over
> manufacturer stacks -- as its OSS, you can fix the bugs, and change
the
> UI / timeouts, etc. to work better for your environment:



> * UNDI transmit descriptors point to packet chunks 16:16 pointers.
This
> is fine if the PXE stack will copy data from those chunks into a fixed

> packet buffer. There are several ways for things to go horribly wrong
if
> the stack attempts to DMA (I believe it is actually impossible to do
> this correctly from CPL3 if arbitrary segmentation & paging are in
> effect). There is a provision in my copy of the UNDI spec for physical

> addresses, which is a lot more robust in a protected mode world, but
the
> spec says "not supported in this version of PXE", which means a lot of

> stacks you encounter in the wild will not support this.

primary test of packet send are OK with us. Yes I am not planning to use
DMA like things.


Well having all the useful info from your e-mail we just want to know,
do you think that it is technically not possible to have such driver.

Please provide your valuable feedback and comments on same.

Best regards,
Deepak Kumar Gupta

-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html