Last week I got myself a new toy at work: a Transtec PROVIGO 410E iSCSI RAID device. The task was to evaluate whether it could serve as an (expandable) external storage system for my company's (then still to be built, now in service) ftp mirror, a disk-based backup system and other possible areas of use. This hardware was available for testing:
This article will look into the initial setup of the Provigo, especially step-by-step via the serial console, which is not covered in the official manuals. Furthermore, connecting the device to a frontend server (single-host configuration via iSCSI; a global file system is not considered for now) running RedHat Enterprise Linux will be outlined.

#1 Getting Rid Of The Packaging

Initially, the device was ordered back in April 2007. Due to some stock shortage, delivery was first scheduled for mid-May 2007 and was later re-scheduled for June. When it arrived, there came that incredibly huge box, which had to be lifted around by two people because of its size and weight (~50 kg). Finally, after opening the box, I stood in front of the device. Along with it, a pair of heavy rails, a serial cable and a documentation CD-ROM were included. Needless to say, the power cable was shipped with a German plug. Amazing that there are still some vendors who have not realized that we have a different power plug in Switzerland, which is physically totally incompatible with the German ones. Well, let's leave this aside for now and look into the configuration stuff.

#2 Preparing the Frontend Server

First things first, I prepared the frontend server with a clean minimum install of RedHat Enterprise Linux 5.0. While the basic installation is clearly beyond the scope of this article, below are the additional steps I have gone through.

#3a Disable Xen Console

If you happen to have only one serial port and also a Xen-enabled kernel installed, the serial console must be released from Xen. This is done by adding xencons=off to the kernel append line(s) in /etc/grub.conf like this:
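(Kernel version and paths below are illustrative; adapt them to your installed Xen kernel.)

    title Red Hat Enterprise Linux Server (2.6.18-8.el5xen)
            root (hd0,0)
            kernel /xen.gz-2.6.18-8.el5
            # xencons=off releases the serial port from the Xen console driver
            module /vmlinuz-2.6.18-8.el5xen ro root=/dev/VolGroup00/LogVol00 xencons=off
            module /initrd-2.6.18-8.el5xen.img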
#3b Install minicom

For serial management a terminal emulator is required. I chose minicom, which requires these RPMs to be installed (the sample applies to x86_64):
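(Package versions are illustrative; minicom typically pulls in lockdev as a dependency.)

    rpm -ivh lockdev-1.0.1-10.x86_64.rpm minicom-2.1-3.x86_64.rpm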
If your RHEL is configured to use a local repository via yum, you may run this instead:
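    yum install minicom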
To configure minicom run:
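    minicom -s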
This will enter the setup dialog, where I chose serial port setup first. The Provigo has a factory default of 115200 8N1 for the serial console, so I changed my settings as shown below. I assumed /dev/ttyS0 (COM1) from my current hardware configuration.
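(Menu letters and flow control defaults may differ slightly between minicom versions.)

    A - Serial Device         : /dev/ttyS0
    E - Bps/Par/Bits          : 115200 8N1
    F - Hardware Flow Control : No
    G - Software Flow Control : No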
Back in the main menu I entered modem and dialing, where I removed the init strings so no AT init commands are accidentally sent to the device. Afterwards I chose save setup as dfl and exited minicom.

#4 Configure the Provigo

To configure the Provigo, attach the serial cable shipped with the device to the RJ11 jack on the Provigo and the RS232 connector on the frontend host. Now start minicom and press enter. If everything works out, you should receive a login prompt.
The login name is root with a default password of root. You may do the initial setup by running the dasetup command for a Q&A-based setup. This will however only cover some basic settings like IP address and hostname; advanced configuration still has to be done manually. For this reason I'll apply all configuration manually. First of all, set some arbitrary hostname, a name server and, if required, a default route.
Now the network interface must be configured for either the default MTU of 1500...
or an MTU of 9000, aka jumbo frames.
Even if source addresses can eventually be spoofed, adding an IP access control list is never wrong. The following will first remove the default ACL and then add the test network and the loopback range to the access list:
And don't forget to set new passwords for guest (unprivileged login) and root (admin login).
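(The dapasswd invocations below are hypothetical; check dapasswd -h for the exact syntax.)

    dapasswd guest    # hypothetical syntax: set the unprivileged login's password
    dapasswd root     # hypothetical syntax: set the admin login's password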
Mind that the dapasswd command does not support single or double quotes. If you have a password which includes special characters, it must be set like this:
You should also set the time zone and time/date of the device. Refer to dadate -h for the input format.
Optionally, email notification can be enabled as seen in the daaddrbk -h command.

#5a Create Custom iSCSI Targets

Now that the basic configuration is complete, it's about time to define our custom iSCSI targets. In client-server terminology, an iSCSI target refers to the server side, which exports a block device to a client. Some basic rules for iSCSI target names, which are also known as IQNs (iSCSI qualified names), define that an IQN consists of multiple tokens separated by dots and colons:

- 1st part is always the keyword 'iqn'
- 2nd part equals the year and month when the domain name (see also 3rd part) was acquired, e.g. 2004-07
- 3rd part is the reversed domain name, e.g. phunsites.net becomes net.phunsites
- 4th part is a colon as delimiter
- 5th part is a string or serial number which should refer to the storage system, e.g. MAC address, hostname, etc.

So when putting this all together, a valid IQN could read like this:
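(The identifier after the colon is a hypothetical hostname.)

    iqn.2004-07.net.phunsites:provigo01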
If you do not own a domain name, or want to stay strictly internal, something like this could also be used:
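(Date, pseudo-domain and identifier are again purely hypothetical.)

    iqn.2007-06.local.mysite:provigo01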
After having defined the proper IQN string it can be set and verified as follows:
#5b Configure Disk Groups

Let's have a look at disk groups, which are actually the RAID sets. Disk groups may be created from all empty/unassigned disks. List them by means of the dadg command.
To create three disk groups with five disks per group, forming a RAID5, use these commands:
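(A hypothetical sketch of the dadg invocations; the real flag names are documented in dadg -h.)

    dadg -a dg0 -t raid5 -d 1,2,3,4,5       # hypothetical flags: disk group dg0, RAID5, disks 1-5
    dadg -a dg1 -t raid5 -d 6,7,8,9,10
    dadg -a dg2 -t raid5 -d 11,12,13,14,15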
Then verify how it looks:
You may also get specific information about any existing disk group.
#5c Create Volumes

Volumes are created from disk groups. Each disk group can host multiple volumes, which will be exported to either a single host at a time or, by using an abstraction layer like a global file system, to multiple hosts at once. Volumes are maintained by the davd command. Refer to davd -h for details on the arguments.
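(A hypothetical davd sketch; volume name, size and flags are illustrative only.)

    davd -a vol0 -g dg0 -s 500G    # hypothetical flags: volume vol0, 500 GB, on disk group dg0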
#5d Add Initiators

To allow any arbitrary host to connect, it must be added to the Provigo initiator list. Again in client-server terms, the initiator refers to the client connecting to the target (server). The dahost command serves that purpose and also allows the use of initiator secrets for additional security. I omitted the latter for the sake of simplicity.
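(A hypothetical dahost invocation; the initiator IQN matches the one configured on the frontend host further below.)

    dahost -a iqn.2004-07.net.phunsites:ftpmirror    # hypothetical flags: register the initiator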
#5e Export LUNs to Initiators

Now that our initiators are known to the system, the volumes must be exported to the initiators. For that purpose the dalun command is used, which will assign a LUN (logical unit number) to each volume.
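(A hypothetical dalun invocation, mapping the volume from above to the initiator as LUN 0.)

    dalun -a -i iqn.2004-07.net.phunsites:ftpmirror -v vol0 -l 0    # hypothetical flags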
#6 Configure Networking

The Provigo manual points out multiple possibilities for network connections, which include variants of channel bonding and different MTU sizes. These may or may not affect the effective transfer rate for all frames; however, this depends heavily on the usage of the iSCSI resource and what the data looks like (e.g. many small file transfers or fewer big file transfers). If you want to use jumbo frames (MTU 9000) you will need to alter your /etc/sysconfig/network-scripts/ifcfg-ethX file. Add an MTU statement to set the value for the new MTU size:
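    MTU=9000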
If you want to try your luck with channel bonding, add this to /etc/modprobe.conf:
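(miimon=100 is a commonly used link-monitoring interval and my assumption here.)

    alias bond0 bonding
    options bond0 mode=active-backup miimon=100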
This will enable active-backup mode, which usually works best. Refer also to README.bonding, which should be somewhere on your system, for other bonding modes. If your bonding device includes eth2, your network configuration should read like this in /etc/sysconfig/network-scripts/ifcfg-eth2:
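    DEVICE=eth2
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=no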
And, if applicable, for /etc/sysconfig/network-scripts/ifcfg-eth3:
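    DEVICE=eth3
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=no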
The network configuration goes to /etc/sysconfig/network-scripts/ifcfg-bond0:
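(IP address and netmask are placeholders for your storage network.)

    DEVICE=bond0
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=192.168.100.2
    NETMASK=255.255.255.0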
Again you may include the MTU=9000 option to enable jumbo frames. To bring up the bonding interface manually use this for example:
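    ifup bond0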
#7 Configure iSCSI Initiator

Now the frontend host needs to be configured. First of all, installation of an iSCSI initiator software is required. On RedHat Enterprise Linux 5.0 this can be installed as easily as:
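    yum install iscsi-initiator-utils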
or
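(Package version is illustrative.)

    rpm -ivh iscsi-initiator-utils-6.2.0.742-0.5.el5.x86_64.rpm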
Then make sure the services are enabled properly, but not yet started.
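    chkconfig iscsid on
    chkconfig iscsi on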
Add the initiator name to /etc/iscsi/initiatorname.iscsi. This should correspond to the initiator name used earlier with the dahost command and also comply with existing host name entries in the DNS zone, if they exist.
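(The IQN shown is the hypothetical initiator name used with dahost above.)

    InitiatorName=iqn.2004-07.net.phunsites:ftpmirror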
Then the iscsid daemon may be started first.
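    service iscsid start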
Run a discovery against the iSCSI target's IP address or hostname. This should reveal the iSCSI target and bind a persistent connection to it.
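(Substitute the Provigo's actual IP address or hostname.)

    iscsiadm -m discovery -t sendtargets -p 192.168.100.10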
Then the iscsi service may be started:
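    service iscsi start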
If everything went well, something like this should show up in the system logs:
The device should then also be visible in /proc/scsi/scsi:
#8 Working With Disks

Now you can work with the disks as if they were installed locally in the system. They are initialized like any other disk device would be, too:
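(The device name is an example; check dmesg or /proc/partitions for the actual one.)

    fdisk /dev/sdb          # create a partition table and a partition
    mkfs.ext3 /dev/sdb1     # put a filesystem onto it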
If you want the iSCSI block devices to be mounted via fstab on startup, the entry should include the _netdev keyword. This will ensure that the network is available before the device is mounted.
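(Device and mount point are examples.)

    /dev/sdb1  /mnt/provigo  ext3  defaults,_netdev  0 0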
#9 Conclusions

While I'm still hacking up some good benchmark scripts, I can tell for sure that the system performs very well and definitely fits my purpose. Even though this is the first iSCSI-based device I've ever had, implementation and usage were very simple and work reliably. I'm in doubt, however, whether an iSCSI device is good enough when it comes to high-load scenarios where every bit of I/O counts. This is definitely one thing I want to test in depth during the next weeks.

Despite still being in evaluation, this review has already left its markings on the wall. There were some odds and ends which I came across and which are to be mentioned here.

First, and already mentioned earlier, is the fact that I didn't get a power cable with a Swiss power plug. It _may_ actually sound pedantic, but when ordering anything from a vendor in Switzerland I do expect to receive the correct equipment.

My second thoughts concern the documentation. I must admit that the guys at Transtec did a very good job writing the handbook. It is very detailed and covers almost everything one needs to know. It lacks, however, a step-by-step description of how setup is done via the serial console. While all commands are properly documented, one gets no idea whatsoever in which order they must be run. It's a matter of puzzling it together by logical evaluation and a bit of trial and error. One might argue that the docs cover step-by-step configuration for the web management interface. Acknowledged. This exists and is indeed very detailed and to the point. But still, I know a lot of people who don't trust web interfaces (me included). Providing a little checklist would be sufficient but still of great help to get things done faster.

The third thing noted is the inconsistent usage of command line arguments within the CLI. This can be seen for example when it comes to reviewing settings.
As the da* commands are clearly part of the Provigo firmware, I can't see the reason why they all have different syntax. It would be more straightforward if they followed a common guideline and defined the same arguments for particular functions to be identical across all utilities. To make things even worse, in the current implementation there also exist variations which require the admin to _know_ in advance what he is querying for. An example of this:
davd is unable to show all defined volumes at once; there's simply no option for this. So I _must_ know the exact names of my volumes. But where do I get them from if I happen to forget the volume names one day? Also, the error reporting could be better or clearer. What would this error message mean:
Host not found. Errr... First thing coming to my mind: host not found. Where? In DNS? In /etc/hosts? Where!? Making the error message say something like 'Please add host "HostXYZ" with the "dahost" command first' would make things much more obvious.

The fourth thing to note is protocol support in the Provigo. Besides the HTTP protocol, it supports the serial console for management and ... telnet. Huh!? Telnet? Telnet used for management purposes looks like an ancient dinosaur to me. Guys, it's 2007! Even if an iSCSI-based storage network is supposed to be separated from public networks and therefore closed down, using telnet for remote management is neither state-of-the-art nor secure. I see no reason why the Provigo could not support SSH. It has plenty of RAM and a decent CPU. Also, the base OS image, which is built on some Linux, is around ~52 MiB with ~48 MiB free space, so there's room enough to include an SSH daemon. Why don't you do so?

A fifth thing I saw was a bug in the dastat command:
Messages like '[: !=: unary operator expected' definitely don't look good, especially when it's simply a matter of proper shell scripting syntax. Looking at /usr/bin/dastat, there is this line:
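(Variable and value are illustrative; the tell-tale sign is the unquoted variable inside the test, which expands to nothing when unset.)

    if [ $status != "ok" ]; then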
Changing it as follows removes the error message:
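(Quoting the variable keeps the test well-formed even when it is empty.)

    if [ "$status" != "ok" ]; then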
Despite these things, which could be regarded as minor glitches, I must admit that the Provigo 410E does an excellent job for a decent price. Given its modular design, it's scalable and can fit multiple scenarios, from single iSCSI environments up to mixed iSCSI and FC environments. As mentioned before, I'm currently doing some benchmarking, which I hope will reveal some interesting information in terms of throughput and I/O performance.