mersenneforum.org  

Go Back   mersenneforum.org > Great Internet Mersenne Prime Search > Hardware > GPU Computing

Old 2016-05-22, 05:24   #45
anonymous

After posting my earlier message, I just wondered if you haven't actually taken it apart to the extent required to see the iPass circuit boards? At first I thought you had disassembled the whole thing, but a glance at the manual suggests that you might be able to see the info you gave me just by removing the fans. In that case, I don't want to trouble you to have to take everything out just to see the iPass boards! The information probably wouldn't help that much anyway.

The strange thing here is that my board serial number is only 19 lower than yours, so they were probably part of the same batch. Very odd that it doesn't work on mine! I might try opening it tomorrow and checking that the jumpers match yours. It also looks like there is a CMOS battery on the board from other photos I have seen - I might take that out for a couple of minutes in case it resets something.

One other thing: what is your firmware version, as displayed in the web interface?

With that in mind, have you considered a firmware update? An upgrade is risky, of course - it takes several minutes, and any failure could leave you needing to replace the main board - but it might help with the cards in the slots that don't work. I would be very surprised, though, if it helped with the I/O address issue you are facing with getting the eighth card to work.
Also, did you try removing and re-inserting (with power off) the affected GPU cards that don't work, just in case they are not seated correctly?

I looked at the lspci -tv output and was surprised by the result, so my most recent post about the order of remove/rescan still stands.
For further progress, setpci might be the answer. I haven't used it before, but I more or less know what it does - the aim is to explicitly reallocate the I/O ports that could not be allocated initially, now that they have been freed. Please can you send the output from:
lspci -vv -s 10:00.0
lspci -vv -s 11:00.0
Assuming that the 10:00.0 card is still the one that doesn't work when you next boot. The idea is just to compare the output for working and non-working cards in order to identify the ports that weren't allocated, although I am not certain how well this will work, given that the other ports have been released anyway. It might also be useful to see:
dmesg | grep 11:00.0
dmesg | grep 10:00.0
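To make the working/non-working comparison quicker, here is a small helper of my own (not from any tool - just a sketch) that classifies the "Region 5" line lspci prints for each card:

```shell
# Sketch: classify the "Region 5" line that lspci -vv prints for a card.
# An I/O BAR shown at address 0000 means the kernel could not allocate
# I/O ports for that card - the symptom being chased in this thread.
classify_io_bar() {
    case "$1" in
        *'I/O ports at 0000'*) echo unassigned ;;
        *'I/O ports at '*)     echo assigned ;;
        *)                     echo 'no I/O BAR' ;;
    esac
}

# Example use against live output (device IDs from this thread):
# lspci -vv -s 10:00.0 | grep 'Region 5' | while read -r l; do classify_io_bar "$l"; done
```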
Old 2016-05-23, 00:44   #46
bgbeuning
Dec 2014

web firmware version is 1.25

Without sudo, the capabilities show as <no access>

Quote:
sudo lspci -vv -s 10:00.0
10:00.0 3D controller: NVIDIA Corporation GF100GL [Tesla M2070] (rev a3)
Subsystem: NVIDIA Corporation Tesla M2070
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 256 bytes
Interrupt: pin A routed to IRQ 18
Region 0: Memory at bc000000 (32-bit, non-prefetchable) [size=32M]
Region 1: Memory at 48000000 (64-bit, prefetchable) [size=128M]
Region 3: Memory at 50000000 (64-bit, prefetchable) [size=64M]
Region 5: I/O ports at 0000
Expansion ROM at <ignored> [disabled]
Capabilities: [60] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Address: 00000000fee00000 Data: 40c9
Capabilities: [78] Express (v1) Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 <64us
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #8, Speed 2.5GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <4us
ClockPM+ Surprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; RCB 128 bytes Disabled- CommClk-
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
Capabilities: [b4] Vendor Specific Information: Len=14 <?>
Capabilities: [100 v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed- WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
Status: NegoPending- InProgress-
Capabilities: [128 v1] Power Budgeting <?>
Capabilities: [600 v1] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
Kernel driver in use: nvidia
Quote:
sudo lspci -vv -s 11:00.0
[sudo] password for bgb:
11:00.0 3D controller: NVIDIA Corporation GF100GL [Tesla M2070] (rev a3)
Subsystem: NVIDIA Corporation Tesla M2070
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 18
Region 0: Memory at c0000000 (32-bit, non-prefetchable) [size=32M]
Region 1: Memory at 58000000 (64-bit, prefetchable) [size=128M]
Region 3: Memory at 54000000 (64-bit, prefetchable) [size=64M]
Region 5: I/O ports at 0000
Capabilities: [60] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
Address: 00000000fee00000 Data: 40d9
Capabilities: [78] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 <64us
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #16, Speed 2.5GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <4us
ClockPM+ Surprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; RCB 128 bytes Disabled- CommClk-
ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
Capabilities: [b4] Vendor Specific Information: Len=14 <?>
Capabilities: [100 v1] Virtual Channel
Caps: LPEVC=0 RefClk=100ns PATEntryBits=1
Arb: Fixed- WRR32- WRR64- WRR128-
Ctrl: ArbSelect=Fixed
Status: InProgress-
VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
Status: NegoPending- InProgress-
Capabilities: [128 v1] Power Budgeting <?>
Capabilities: [600 v1] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
Kernel driver in use: nvidia
Quote:
dmesg | grep 10:00.0
[ 0.641524] pci 0000:10:00.0: [10de:06d2] type 00 class 0x030200
[ 0.641549] pci 0000:10:00.0: reg 0x10: [mem 0xc6000000-0xc7ffffff]
[ 0.641571] pci 0000:10:00.0: reg 0x14: [mem 0x50000000-0x57ffffff 64bit pref]
[ 0.641594] pci 0000:10:00.0: reg 0x1c: [mem 0x4c000000-0x4fffffff 64bit pref]
[ 0.641608] pci 0000:10:00.0: reg 0x24: [io 0x7c00-0x7c7f]
[ 0.641623] pci 0000:10:00.0: reg 0x30: [mem 0xc5f80000-0xc5ffffff pref]
[ 0.735359] pci 0000:10:00.0: BAR 1: no space for [mem size 0x08000000 64bit pref]
[ 0.735361] pci 0000:10:00.0: BAR 1: failed to assign [mem size 0x08000000 64bit pref]
[ 0.735363] pci 0000:10:00.0: BAR 3: no space for [mem size 0x04000000 64bit pref]
[ 0.735364] pci 0000:10:00.0: BAR 3: failed to assign [mem size 0x04000000 64bit pref]
[ 0.735366] pci 0000:10:00.0: BAR 0: no space for [mem size 0x02000000]
[ 0.735368] pci 0000:10:00.0: BAR 0: failed to assign [mem size 0x02000000]
[ 0.735370] pci 0000:10:00.0: BAR 6: no space for [mem size 0x00080000 pref]
[ 0.735372] pci 0000:10:00.0: BAR 6: failed to assign [mem size 0x00080000 pref]
[ 0.735377] pci 0000:10:00.0: BAR 5: no space for [io size 0x0080]
[ 0.735379] pci 0000:10:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.736168] pci 0000:10:00.0: BAR 1: assigned [mem 0x48000000-0x4fffffff 64bit pref]
[ 0.736183] pci 0000:10:00.0: BAR 3: assigned [mem 0x50000000-0x53ffffff 64bit pref]
[ 0.736197] pci 0000:10:00.0: BAR 0: no space for [mem size 0x02000000]
[ 0.736199] pci 0000:10:00.0: BAR 0: failed to assign [mem size 0x02000000]
[ 0.736205] pci 0000:10:00.0: BAR 5: no space for [io size 0x0080]
[ 0.736206] pci 0000:10:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.737085] pci 0000:10:00.0: BAR 0: no space for [mem size 0x02000000]
[ 0.737087] pci 0000:10:00.0: BAR 0: failed to assign [mem size 0x02000000]
[ 0.737092] pci 0000:10:00.0: BAR 5: no space for [io size 0x0080]
[ 0.737094] pci 0000:10:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.737893] pci 0000:10:00.0: BAR 0: no space for [mem size 0x02000000]
[ 0.737895] pci 0000:10:00.0: BAR 0: failed to assign [mem size 0x02000000]
[ 0.737900] pci 0000:10:00.0: BAR 5: no space for [io size 0x0080]
[ 0.737902] pci 0000:10:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.738686] pci 0000:10:00.0: BAR 0: no space for [mem size 0x02000000]
[ 0.738688] pci 0000:10:00.0: BAR 0: failed to assign [mem size 0x02000000]
[ 0.738693] pci 0000:10:00.0: BAR 5: no space for [io size 0x0080]
[ 0.738695] pci 0000:10:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.739466] pci 0000:10:00.0: BAR 0: assigned [mem 0xbc000000-0xbdffffff]
[ 0.739477] pci 0000:10:00.0: BAR 5: no space for [io size 0x0080]
[ 0.739479] pci 0000:10:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.740257] pci 0000:10:00.0: BAR 5: no space for [io size 0x0080]
[ 0.740258] pci 0000:10:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.740963] pci 0000:10:00.0: BAR 5: no space for [io size 0x0080]
[ 0.740965] pci 0000:10:00.0: BAR 5: failed to assign [io size 0x0080]
[ 14.167005] [drm] Initialized nvidia-drm 0.0.0 20150116 for 0000:10:00.0 on minor 2
Quote:
dmesg | grep 11:00.0
[ 0.649526] pci 0000:11:00.0: [10de:06d2] type 00 class 0x030200
[ 0.649551] pci 0000:11:00.0: reg 0x10: [mem 0xca000000-0xcbffffff]
[ 0.649573] pci 0000:11:00.0: reg 0x14: [mem 0x60000000-0x67ffffff 64bit pref]
[ 0.649595] pci 0000:11:00.0: reg 0x1c: [mem 0x5c000000-0x5fffffff 64bit pref]
[ 0.649610] pci 0000:11:00.0: reg 0x24: [io 0x8c00-0x8c7f]
[ 0.649625] pci 0000:11:00.0: reg 0x30: [mem 0xc9f80000-0xc9ffffff pref]
[ 0.735398] pci 0000:11:00.0: BAR 1: no space for [mem size 0x08000000 64bit pref]
[ 0.735400] pci 0000:11:00.0: BAR 1: failed to assign [mem size 0x08000000 64bit pref]
[ 0.735402] pci 0000:11:00.0: BAR 3: no space for [mem size 0x04000000 64bit pref]
[ 0.735404] pci 0000:11:00.0: BAR 3: failed to assign [mem size 0x04000000 64bit pref]
[ 0.735406] pci 0000:11:00.0: BAR 0: no space for [mem size 0x02000000]
[ 0.735407] pci 0000:11:00.0: BAR 0: failed to assign [mem size 0x02000000]
[ 0.735409] pci 0000:11:00.0: BAR 6: no space for [mem size 0x00080000 pref]
[ 0.735411] pci 0000:11:00.0: BAR 6: failed to assign [mem size 0x00080000 pref]
[ 0.735416] pci 0000:11:00.0: BAR 5: no space for [io size 0x0080]
[ 0.735418] pci 0000:11:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.736227] pci 0000:11:00.0: BAR 1: assigned [mem 0x58000000-0x5fffffff 64bit pref]
[ 0.736242] pci 0000:11:00.0: BAR 3: assigned [mem 0x54000000-0x57ffffff 64bit pref]
[ 0.736257] pci 0000:11:00.0: BAR 0: no space for [mem size 0x02000000]
[ 0.736259] pci 0000:11:00.0: BAR 0: failed to assign [mem size 0x02000000]
[ 0.736264] pci 0000:11:00.0: BAR 5: no space for [io size 0x0080]
[ 0.736266] pci 0000:11:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.737114] pci 0000:11:00.0: BAR 0: no space for [mem size 0x02000000]
[ 0.737116] pci 0000:11:00.0: BAR 0: failed to assign [mem size 0x02000000]
[ 0.737122] pci 0000:11:00.0: BAR 5: no space for [io size 0x0080]
[ 0.737123] pci 0000:11:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.737922] pci 0000:11:00.0: BAR 0: no space for [mem size 0x02000000]
[ 0.737924] pci 0000:11:00.0: BAR 0: failed to assign [mem size 0x02000000]
[ 0.737929] pci 0000:11:00.0: BAR 5: no space for [io size 0x0080]
[ 0.737931] pci 0000:11:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.738715] pci 0000:11:00.0: BAR 0: no space for [mem size 0x02000000]
[ 0.738717] pci 0000:11:00.0: BAR 0: failed to assign [mem size 0x02000000]
[ 0.738722] pci 0000:11:00.0: BAR 5: no space for [io size 0x0080]
[ 0.738724] pci 0000:11:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.739501] pci 0000:11:00.0: BAR 0: assigned [mem 0xc0000000-0xc1ffffff]
[ 0.739512] pci 0000:11:00.0: BAR 5: no space for [io size 0x0080]
[ 0.739514] pci 0000:11:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.740281] pci 0000:11:00.0: BAR 5: no space for [io size 0x0080]
[ 0.740282] pci 0000:11:00.0: BAR 5: failed to assign [io size 0x0080]
[ 0.740987] pci 0000:11:00.0: BAR 5: no space for [io size 0x0080]
[ 0.740988] pci 0000:11:00.0: BAR 5: failed to assign [io size 0x0080]
[ 14.167206] [drm] Initialized nvidia-drm 0.0.0 20150116 for 0000:11:00.0 on minor 3

Last fiddled with by bgbeuning on 2016-05-23 at 00:45
Old 2016-05-23, 01:45   #47
anonymous

Try this (noting that it might cause the system to crash or lock up):
sudo setpci -v -s 10:00.0 BASE_ADDRESS_5=0000dc01

Then try rescanning the bus, as before.
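Since a wrong address here can hang the machine, a dry-run helper (my own, purely hypothetical) that just prints the two commands for review before anything is executed as root may be safer:

```shell
# Hypothetical helper: print, rather than execute, the commands needed to
# hand-assign an I/O BAR and then rescan the bus. The device and address
# are the examples from this thread - substitute your own.
plan_io_fix() {
    dev=$1; addr=$2
    echo "setpci -v -s $dev BASE_ADDRESS_5=$addr"
    echo "echo 1 > /sys/bus/pci/rescan"
}

plan_io_fix 10:00.0 0000dc01
```

Writing 1 to /sys/bus/pci/rescan asks the kernel to re-enumerate the bus, as in the remove/rescan steps discussed earlier; once the printed commands look right, they can be run under sudo.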

Here I am trying to manually assign some I/O space to the GPU that isn't working, and then get it picked up by the bus. However, compatible settings may also need to be put in place manually on the bus. For that, it might help to see:
sudo lspci -s 0e:04.0 -vv
sudo lspci -s 0b:00.0 -vv

The above were two bridges that gave errors in the logs you showed earlier. It would also be helpful to have two other equivalent ones that did not:
sudo lspci -s 14:04.0 -vv
sudo lspci -s 09:04.0 -vv
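When comparing the bridge dumps, the line to watch is "I/O behind bridge: START-END". A small helper (again my own, for illustration) turns that range into a size; bridge I/O windows are allocated in 4 KiB steps, which is part of why the 64 KiB of x86 I/O space runs out with many GPUs:

```shell
# io_window_size: given an lspci "I/O behind bridge: START-END" line,
# print the window size in bytes. PCI-to-PCI bridges round their I/O
# windows up to 4 KiB granularity.
io_window_size() {
    range=${1##*: }      # e.g. "00007000-00007fff"
    start=0x${range%-*}
    end=0x${range#*-}
    echo $(( end - start + 1 ))
}

io_window_size 'I/O behind bridge: 00007000-00007fff'   # prints 4096
```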

Also, please can you try the following; it might give different tree output:
lspci -btv
Old 2016-05-23, 02:39   #48
anonymous

Please can you also send the output from the simpleP2P program in the CUDA samples? It would be very helpful to know whether GPUDirect is working on your system. To do this, go to the CUDA samples directory. If you haven't already installed them, just run:
cuda-install-samples-7.5.sh <dirname>

Then, in the directory that you just specified, please go to:
0_Simple/simpleP2P/
Then run make, and run ./simpleP2P when it has finished.

The initial part of my output is as follows. I'm hoping that yours will say "yes" where mine says "no":
Quote:
CUDA-capable device count: 4
> GPU0 = " Tesla M2070" IS capable of Peer-to-Peer (P2P)
> GPU1 = " Tesla M2070" IS capable of Peer-to-Peer (P2P)
> GPU2 = " Tesla M2070" IS capable of Peer-to-Peer (P2P)
> GPU3 = " Tesla M2070" IS capable of Peer-to-Peer (P2P)

Checking GPU(s) for support of peer to peer memory access...
> Peer access from Tesla M2070 (GPU0) -> Tesla M2070 (GPU1) : No
> Peer access from Tesla M2070 (GPU0) -> Tesla M2070 (GPU2) : No
> Peer access from Tesla M2070 (GPU0) -> Tesla M2070 (GPU3) : No
> Peer access from Tesla M2070 (GPU1) -> Tesla M2070 (GPU0) : No
> Peer access from Tesla M2070 (GPU1) -> Tesla M2070 (GPU2) : No
> Peer access from Tesla M2070 (GPU1) -> Tesla M2070 (GPU3) : No
> Peer access from Tesla M2070 (GPU2) -> Tesla M2070 (GPU0) : No
> Peer access from Tesla M2070 (GPU2) -> Tesla M2070 (GPU1) : No
> Peer access from Tesla M2070 (GPU2) -> Tesla M2070 (GPU3) : No
> Peer access from Tesla M2070 (GPU3) -> Tesla M2070 (GPU0) : No
> Peer access from Tesla M2070 (GPU3) -> Tesla M2070 (GPU1) : No
> Peer access from Tesla M2070 (GPU3) -> Tesla M2070 (GPU2) : No
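If it helps when comparing runs, a throwaway helper like this (hypothetical, not part of the CUDA samples) counts the Yes/No peer-access pairs in a saved log; the output quoted above would report 0 yes and 12 no:

```shell
# Sketch: count Yes/No peer-access pairs in a saved simpleP2P log file.
summarize_p2p() {
    yes=$(grep -c 'Peer access from .* : Yes' "$1")
    no=$(grep -c 'Peer access from .* : No' "$1")
    echo "yes=$yes no=$no"
}

# Usage: ./simpleP2P > p2p.log && summarize_p2p p2p.log
```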

Last fiddled with by anonymous on 2016-05-23 at 02:40
Old 2016-05-24, 22:39   #49
bgbeuning
Dec 2014

When I powered up the C410X, 4 boards did not start. After pushing their reset buttons, 2 started but 2 are still off. As a result, the PCI 10:00.0 address is online this time. I am pretty convinced the I/O ports are why I am only seeing 7 of 8 cards, and for me the simplest solution is to switch to 4:1 mode and use all 4 nodes in the C6100.

Here is some of the output you asked for. The simpleP2P run had Yes instead of No.
Attached Files
File Type: txt btv.txt (9.7 KB, 268 views)
File Type: txt simpleP2P.txt (3.9 KB, 267 views)
Old 2016-05-24, 22:45   #50
anonymous

Thanks - it's good to know that GPUDirect works.

The new lspci output unfortunately didn't give me any more ideas about your issue.

I would suggest trying the IPMI commands for the C410X that I linked to before: http://www.dell.com/support/article/us/en/04/SLN244176. Sometimes a few of my cards do not start, and I am able to fix the problem by issuing power commands to individual slots, as described in that link.
Old 2016-06-08, 12:38   #51
anonymous

Just a brief update about how things eventually turned out for me:
I purchased a Dell C6220 II on eBay. It uses newer Sandy Bridge or Ivy Bridge CPUs, compared with the Nehalem or Westmere in the C6100, and supports full 64-bit PCI addressing, so it can potentially work with more, and newer, GPUs. It is also available in a configuration with 2x2U nodes and 2 full-height PCI slots per node, rather than the 4x1U nodes of the C6100 with only one low-profile PCI slot each.

I also eventually found that the system board in the C410X did not need to be replaced to enable 8:1 mode; instead, the upper iPass board needed to be replaced. Both versions have the same part number (H0NJN), but the new one says "Rev 2.1" on the PCB, while the original says "Rev 1.0". The new one also has a sticker marked "DJCN65", while the older one is marked "BGCN15". I now have 8:1 working without a new system board. I have firmware 1.35 loaded.

8 GPUs per node work fine with my C6220 II. I even tried a dual HIC card (NVIDIA P894, which requires a full-height PCI Express slot), but found that I could get only up to 10 GPUs per node - there were insufficient I/O ports available for more. Even that required disabling the mezzanine card slot (in which I have an InfiniBand card) and the RAID controller, and booting only from the onboard SD card slot. There is a UEFI option in the BIOS that I haven't tried yet - I am hopeful that it might allow an extra 1-2 GPUs per node by freeing some I/O ports that are currently allocated to bridges but seemingly unused.
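The 10-GPU ceiling is consistent with a rough I/O-port budget (my own back-of-envelope, with assumed numbers): each GPU that requests an I/O BAR costs its slot bridge a window rounded up to 4 KiB, and x86 has only 64 KiB of legacy I/O space in total:

```shell
# Back-of-envelope: 4 KiB bridge I/O window per GPU vs 64 KiB total space.
# Onboard devices (NICs, SATA, the mezzanine and RAID slots mentioned
# above) consume windows from the same 64 KiB - hence disabling them helps.
gpus=10
echo "GPU I/O windows: $(( gpus * 4096 )) of 65536 ports"
```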

Last fiddled with by anonymous on 2016-06-08 at 12:45
Old 2016-06-08, 23:40   #52
bgbeuning
Dec 2014

Sounds like you are making great progress. Glad to hear it.

It's sad the nvidia driver is closed source; otherwise someone could change it to:
1. Allocate I/O ports
2. Init the GPU
3. Release the I/O ports
4. Repeat for all GPUs
Old 2016-06-09, 05:26   #53
xilman
May 2003
Down not across

Quote:
Originally Posted by bgbeuning View Post
Sounds like you are making great progress. Glad to hear it.

It's sad the nvidia driver is closed source; otherwise someone could change it to:
1. Allocate I/O ports
2. Init the GPU
3. Release the I/O ports
4. Repeat for all GPUs
"Someone" could also ask Nvidia to make those changes ...