Trying to set up a new system as below (part of a FlexPod implementation):
Data ONTAP 8.0.2 7-Mode
Cabling follows the NetApp guide for dual-chassis HA.
One controller boots fine and can be configured; the other complains that it has 0 disks and continually reboots.
The problem controller will boot into maintenance mode, but the "disk show" and "disk assign" commands don't behave as expected: they report no unassigned disks available, even though "disk show" on the working controller shows half of the disks as unassigned.
I've tried swapping the controllers over to rule out cabling issues, and the fault stays with the same physical controller.
I'd appreciate any ideas, as I've run out of them. Output from a failed boot cycle is below:
Sun Oct 14 09:51:42 GMT [config.noPartnerDisks:CRITICAL]: No disks were detected for the partner; this node will be unable to takeover correctly
Sun Oct 14 09:51:42 GMT [callhome.dsk.config:warning]: Call home for DISK CONFIGURATION ERROR
Sun Oct 14 09:51:43 GMT [fmmb.instStat.change:info]: no mailbox instance on local side.
Sun Oct 14 09:51:43 GMT [fmmb.instStat.change:info]: no mailbox instance on partner side.
Sun Oct 14 09:51:43 GMT [cf.fm.noMBDisksOrIc:warning]: Could not find the local mailbox disks. Could not determine the firmware state of the partner through the HA interconnect.
WARNING: 0 disks found!
Storage Adapters found:
0 Fibre Channel Storage Adapters found!
6 SAS Adapters found!
0 Parallel SCSI Storage Adapters found!
0 ATA Adapters found!
Target Adapters found:
4 Fibre Channel Target Adapters found!
2 iSCSI Target Adapters found!
1 Unknown Target Adapters found!
Check that disks have been assigned ownership to this system (ID 1575087607) using the 'disk show' and 'disk assign'
commands from maintenance mode.
I am guessing that all of the disks are owned by the other controller. If they have all been assigned to the working controller but not yet added to an aggregate, you can release their ownership with the "disk assign <disk_name> -s unowned -f" command and then assign them to the problem node.
Once you do that, 'disk show -n' should now show that disk as being unowned.
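Roughly, the sequence looks like this when run from the node that currently owns the disks (0a.16 is just a placeholder disk name; double-check the exact "disk assign" syntax against the man page for your release, as I'm quoting it from memory):

```
*> disk show -v                       # list all disks with their owner sysids
*> disk assign 0a.16 -s unowned -f    # release ownership of one disk
*> disk show -n                       # 0a.16 should now appear as unowned
```

Repeat the assign for each disk you want to hand over, then claim them from the problem controller's maintenance mode.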
You will need a minimum of three disks to create an aggregate, and they will need to be zeroed as well. Zeroing takes quite some time on SATA drives, considerably less on SAS drives.
If the disks have already been added to an aggregate, you will need to destroy the aggregate before using the "disk assign <disk_name> -s unowned -f" command.
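In that case, the rough sequence on the owning node is below. aggr_old is a placeholder name, and destroying an aggregate erases everything on it, so be certain it holds nothing you need (you also cannot destroy the root aggregate of a running node):

```
> aggr status                        # identify the aggregate holding the disks
> aggr offline aggr_old              # the aggregate must be offline first
> aggr destroy aggr_old              # releases its disks back to spares
> disk assign 0a.16 -s unowned -f    # the spares can then be unowned as above
```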
As you say, the first node, which is up and running, can see half of the disks.
Boot the troubled node into maintenance mode and run "disk show -n" and "sysconfig -a" to confirm that it can see all of the loops and their associated disks.
Then copy the "disk show -n" output into a spreadsheet and separate out the disks owned by node 1 (the one that is up and running).
The leftover disks are most likely owned by a former controller or previously used shelves, so change their ownership to the system ID of your troubled node.
Once ownership is changed, zero the disks; it may take a couple of hours to complete, but you are good to go from there.
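From the troubled node's maintenance mode, that works out to something like the following (syntax is 7-Mode as I remember it, so verify it on your release; boot menu option 4 wipes and zeroes all assigned disks):

```
*> disk show -n        # unowned disks visible to this node
*> sysconfig -a        # confirm every loop/shelf and disk is seen
*> disk assign all     # claim all unowned disks for this node,
                       # or name specific disks: disk assign 0a.16 0a.17
*> halt
# then boot ONTAP and choose boot menu option 4
# ("Clean configuration and initialize all disks")
# to zero the disks and build a root aggregate
```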
If you still have issues, contact NetApp Support or professional services.