Experimented with Fibre Channel

Started by ggnfs000, June 08, 2017, 05:41:35 PM


ggnfs000

Took a day or two for a Fibre Channel configuration crash course, and after that I surprisingly had a working disk array set up.
Had one MDS in the lab with one pre-configured disk array <vendor1> and a bunch of UCS blades; that setup was already working. I set up another disk array running on a Cisco rack server with the <vendor2> SAN management software. It was pretty straightforward: I looked at the pre-configured <vendor1> setup and more or less translated it into a new configuration on <vendor2>. Most of the work involved creating vdisks on the <vendor2> rack server, presenting them to the host, and creating additional zones encompassing <vendor2> and the other connected UCS hosts.

This worked for up to 2 virtual disks created on <vendor2> with one of the UCS Fabric Interconnects.
I then created additional virtual disks (3rd, 4th, and so on) and decided those would serve another Fabric Interconnect. I created the zones the same way as for the first two disks and specified the WWPNs in the zones on the MDS, but the <vendor2> disk array software just won't see it. By "not seeing" I mean the rack server's FC port does not see the host port I defined on the additional FI.
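
For reference, the zoning step on the MDS looks roughly like the sketch below; the VSAN number, zone/zoneset names, and WWPNs are made-up placeholders, not my actual values:

! placeholder values throughout: vsan 10, names, and pwwns are illustrative only
zone name Z_VENDOR2_NEWHOST vsan 10
  ! initiator pwwn (the vHBA/host port behind the second FI)
  member pwwn 20:00:00:25:b5:aa:00:03
  ! target pwwn (one of the Emulex ports on the <vendor2> array)
  member pwwn 50:01:10:a0:00:12:34:56
zoneset name ZS_FABRIC_B vsan 10
  member Z_VENDOR2_NEWHOST
! re-activate the zoneset so the change takes effect
zoneset activate name ZS_FABRIC_B vsan 10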

I am kind of suspecting this is due to the 2-port Emulex FC adapter installed in the <vendor2> rack server. Can a 2-port adapter serve (through multiplexing) multiple hosts and multiple vdisks?

Dieselboy

Nice! What are the key things you're taking away from the lab, e.g. lessons learned?

I've not set up FC, but I'm aware that the Fabric Interconnect's FC ports must be at the left-most side of the switch (or was it right-most?). Basically, you can't have Ethernet on ports 1-8, FC on ports 9-16, and then Ethernet on ports 17-32, for example.

How similar is FC to iSCSI?

NetworkGroover

#2
Why did you choose to learn about FC?  Do you have to support it?

FC needs to just die.

Is anybody standing up greenfield DCs with FC? Not being snarky, I'm seriously asking, and I'm curious as to the reasons they didn't go with IP storage instead.

EDIT - I guess I should state just in case it isn't obvious that this is my opinion - not a statement of fact.
Engineer by day, DJ by night, family first always

ggnfs000

#3
Quote from: Dieselboy on June 09, 2017, 02:11:36 AM
Nice! What are the key things you're taking away from the lab, e.g. lessons learned?

I've not set up FC, but I'm aware that the Fabric Interconnect's FC ports must be at the left-most side of the switch (or was it right-most?). Basically, you can't have Ethernet on ports 1-8, FC on ports 9-16, and then Ethernet on ports 17-32, for example.

How similar is FC to iSCSI?

I'd say it took a fair amount of studying. There are one-to-one similarities between many LAN and SAN terms, so knowing LAN helped to a certain degree: FC layers vs. the internetworking layers, VLAN vs. VSAN, WWPN/WWNN vs. MAC.
But zoning was new, and each vendor's disk array software is different.

This was pure FC; I haven't even started getting into FCoE, which is FC over Ethernet.

The initiator and target concepts in iSCSI are similar to FC.

"I've not set up FC but I'm aware that the Fabric Interconnects FC ports must be at the left-most side of the switch (or was it right-most?). Basically, you can't have Ethernet ports 1-8 and FC ports 9-16 and then Ethernet from 17-32 for example. "
I dont think this is necessary. With modular switch, you can plug in either FC module or ethernet module. But not exactly sure though.
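
To make the VLAN vs. VSAN analogy concrete, the VSAN side of an MDS config looks roughly like this; the VSAN number, name, and interface here are placeholders:

! a VSAN is created and an FC interface assigned to it, much like a VLAN and an access port
vsan database
  vsan 10 name FABRIC_A_STORAGE
  vsan 10 interface fc1/1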



ggnfs000

Quote from: AspiringNetworker on June 09, 2017, 12:48:36 PM
Why did you choose to learn about FC?  Do you have to support it?

FC needs to just die.

Is anybody standing up greenfield DCs with FC? Not being snarky, I'm seriously asking, and I'm curious as to the reasons they didn't go with IP storage instead.

EDIT - I guess I should state just in case it isn't obvious that this is my opinion - not a statement of fact.

With FCoE becoming ubiquitous, I imagine pure FC will go extinct soon: one cable carrying both storage and Ethernet traffic. However, I figured that knowing it to a certain degree would not hurt.

wintermute000

Getting OT but re: the market discussion

FCoE is only ubiquitous in the sense of one-hop FCoE into a UCS Fabric Interconnect (in UCS environments only, natch), which hands it off as actual FC back to the array. Cisco's FCoE-everywhere dream died a long time ago. I don't know a single customer running FCoE anywhere but into the FIs as the first hop. And you still need to know FC for FCoE anyway; it just swaps out layers 1/2, and layer 3 onwards is pure FC.

And pretty much everyone is lining up behind iSCSI.

And I haven't even mentioned hyperconverged...

snarky fact: refer to the array as the SAN to storage/FC guys and watch them get angry. SAN refers to the FC network (clue: Storage Area NETWORK), not the disks LOLOL

ggnfs000

Quote from: wintermute000 on June 10, 2017, 06:38:55 AM
Getting OT but re: the market discussion

FCoE is only ubiquitous in the sense of one-hop FCoE into a UCS Fabric Interconnect (in UCS environments only, natch), which hands it off as actual FC back to the array. Cisco's FCoE-everywhere dream died a long time ago. I don't know a single customer running FCoE anywhere but into the FIs as the first hop. And you still need to know FC for FCoE anyway; it just swaps out layers 1/2, and layer 3 onwards is pure FC.

And pretty much everyone is lining up behind iSCSI.

And I haven't even mentioned hyperconverged...

snarky fact: refer to the array as the SAN to storage/FC guys and watch them get angry. SAN refers to the FC network (clue: Storage Area NETWORK), not the disks LOLOL

Interesting, so you think iSCSI is the future? Yes, FC is complicated; it is like running another full network beside Ethernet in the data center.

wintermute000

I'm no storage guru but I don't see any expansion or new builds of FC networks. Everything new is going IP storage and/or hyperconverged. Cloud is also pushing the trend against in-house massive FC storage networks.

ggnfs000

Yes, hyperconvergence and webscale, or whatever you want to call it, is becoming a trend: a DC made of pods, each contributing an itty-bitty amount of network, storage, and compute.

ggnfs000

Now, back to the real focal point of the discussion. I was hoping to get a hint from an FC expert on this one:

I've got two virtual disks on the SAN array serving the host through 2 Emulex FC ports. If I create additional ones, the array will not see the additional host ports at all. Remember, a host port here is a WWPN on the initiator side, where the FC endpoint seeks to access the SAN array's disk resources.

So I am not sure whether this is because the 2 ports limit the number of disks that can be served from the SAN. I am assuming that is not the case: if one were to create hundreds of virtual disks on the SAN and each one required its own physical port, that would be a horrible technological limitation, so that can't be right. A single FC port on a SAN array should be able to multiplex traffic for several disks (in other words, SAN traffic destined for several different disks should be able to pass through an arbitrary number of ports, I assume). If that is the case, which is likely, then something is not being done right in my setup.
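
In case it helps anyone spot the problem, these are roughly the checks I plan to run on the MDS (VSAN 10 here is a placeholder); as I understand it, the initiator's WWPN has to show up in the FLOGI and name-server databases before zoning can do anything for it:

! did the host port actually log in to the fabric on this switch?
show flogi database vsan 10
! is the pwwn registered with the fabric name server?
show fcns database vsan 10
! is the zone containing that pwwn part of the active zoneset?
show zoneset active vsan 10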


NetworkGroover

#10
Quote from: wintermute000 on June 10, 2017, 11:48:59 PM
I'm no storage guru but I don't see any expansion or new builds of FC networks. Everything new is going IP storage and/or hyperconverged. Cloud is also pushing the trend against in-house massive FC storage networks.

+1000

I'm no guru either, but I'll predict FC is going to shrink as the years move on.  Just think about it - folks are trying to consolidate, and FC makes it so that you have to have separate equipment, with a separate skillset to manage.  In the past it had its advantages in performance, but now, as bandwidth increases and IP storage continues to prove itself... those two pieces I just mentioned are MAJOR drawbacks.  It's only going to get worse as time moves on.

EDIT - And sorry for hijacking the thread with my personal musings.  I'll shaddap now. xD
Engineer by day, DJ by night, family first always

ggnfs000

Quote from: AspiringNetworker on June 12, 2017, 02:27:09 PM
Quote from: wintermute000 on June 10, 2017, 11:48:59 PM
I'm no storage guru but I don't see any expansion or new builds of FC networks. Everything new is going IP storage and/or hyperconverged. Cloud is also pushing the trend against in-house massive FC storage networks.

+1000

I'm no guru either, but I'll predict FC is going to shrink as the years move on.  Just think about it - folks are trying to consolidate, and FC makes it so that you have to have separate equipment, with a separate skillset to manage.  In the past it had its advantages in performance, but now, as bandwidth increases and IP storage continues to prove itself... those two pieces I just mentioned are MAJOR drawbacks.  It's only going to get worse as time moves on.

EDIT - And sorry for hijacking the thread with my personal musings.  I'll shaddap now. xD

I agree with this general trend; it is extra equipment and extra training. But for today, my problem remains, haha. I definitely need more advanced debugging: after zone creation, the initiator and target are just not seeing each other. Maybe I will find something.

deanwebb

I have a suggestion: set the FC labbing aside for a little while and try to solve the same issue with iSCSI. Think of it as a comparative "bake-off" contest between the technologies.

If you have success with iSCSI, not only will you consider that in your evaluation, but it may give you some insight into how to resolve the FC issue.

If you do not have success with iSCSI, you may still get insight into how to roll with FC... sometimes we get insights into our current problems by trying to solve other ones.
Take a baseball bat and trash all the routers, shout out "IT'S A NETWORK PROBLEM NOW, SUCKERS!" and then peel out of the parking lot in your Ferrari.
"The world could perish if people only worked on things that were easy to handle." -- Vladimir Savchenko
Вопросы есть? Вопросов нет! | BCEB: Belkin Certified Expert Baffler | "Plan B is Plan A with an element of panic." -- John Clarke
Accounting is architecture, remember that!
Air gaps are high-latency Internet connections.

ggnfs000

Turns out there are fcping and fctrace commands. The offending WWPN, when pinged, returns "No switch with the given wwn is present in the vsan".
The funny thing is that I see the same error message with the working array.
When using fctrace, it says "Name-server does not have an entry for this pwwn." I think I am closing in.
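
For anyone following along, this is roughly the syntax I'm using on the MDS; the WWPN and VSAN below are placeholders, not my real values:

! ping a port WWN within a VSAN
fcping pwwn 50:01:10:a0:00:12:34:56 vsan 10
! trace the fabric path toward that port WWN
fctrace pwwn 50:01:10:a0:00:12:34:56 vsan 10

My understanding is that a missing name-server entry usually means the port never completed FLOGI into that VSAN, so the next things I'll check are show flogi database and the VSAN membership of the FC interface facing the array.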

ggnfs000

Quote from: deanwebb on June 14, 2017, 11:53:30 AM
I have a suggestion: set the FC labbing aside for a little while and try to solve the same issue with iSCSI. Think of it as a comparative "bake-off" contest between the technologies.

If you have success with iSCSI, not only will you consider that in your evaluation, but it may give you some insight into how to resolve the FC issue.

If you do not have success with iSCSI, you may still get insight into how to roll with FC... sometimes we get insights into our current problems by trying to solve other ones.

That would be an interesting experiment. The lab work I am doing is actually a project to set up an FC environment, so I should not have called it an experiment. Well, it is an experiment in the sense that I started out with no certainty that I would be able to do it.