Openfiler is a GPLv2-licensed open-source storage management appliance. Openfiler offers a convenient web-based management interface for various network storage services, supporting NFS, CIFS, HTTP/DAV, rsync, iSCSI, etc. Under the hood, it is simply a Red Hat (RHEL-based) Linux system. In this tutorial, I will explain how to build a network-attached storage (NAS) server with an iSCSI target and initiator. In this example we use Openfiler as the iSCSI target and a RHEL 5 machine as the iSCSI initiator, which requires the iSCSI initiator package to be installed on the client.
Presentation

In the iSCSI world, you've got two types of agents:
- an iSCSI target provides some storage (here called the server),
- an iSCSI initiator uses this available storage (here called the client).

As you already guessed, we are going to use two virtual machines, respectively called server and client. If necessary, the server and the client can be one and the same machine.

iSCSI Target Configuration

Most of the target configuration is done interactively through the targetcli command.
This command uses a directory tree to access the different objects. To create an iSCSI target, you need to follow several steps on the server virtual machine.

Install the following package:
# yum install -y targetcli

Activate the target service at boot:
# systemctl enable target
Note: This is mandatory, otherwise your configuration won't be read after a reboot!

Execute the targetcli command:
# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb34
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/>

You've got two options:
Option 1: you can create a fileio backstore called shareddata of 100MB in the /opt directory (don't hesitate to use tab completion):
/> backstores/fileio/ create shareddata /opt/shareddata.img 100M
Created fileio shareddata with size 104857600
Note: If you don't specify write_back=false at the end of the previous command, write_back=true is assumed. The write_back option set to true enables the local file system cache; this improves performance but increases the risk of data loss. In production environments, it is recommended to use write_back=false.

Option 2: you can create a block backstore, which usually provides the best performance. You can use a block device like /dev/sdb or a logical volume previously created (# lvcreate --name lviscsi --size 100M vg):
/> backstores/block/ create block1 /dev/vg/lviscsi
Created block storage object block1 using /dev/vg/lviscsi.

Then, create an IQN (iSCSI Qualified Name) called iqn.2014-08.com.example with a target named t1 and get an associated TPG (Target Portal Group):
/> iscsi/ create iqn.2014-08.com.example:t1
Created target iqn.2014-08.com.example:t1.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
Note: The IQN follows the standard naming convention iqn.YYYY-MM.reversed-domain:identifier (see RFC 3720 for more details).

Now, we can go to the newly created directory:
/> cd iscsi/iqn.2014-08.com.example:t1/tpg1
/iscsi/iqn.20...ample:t1/tpg1> ls
o- tpg1 .................... [no-gen-acls, no-auth]
  o- acls .................. [ACLs: 0]
  o- luns .................. [LUNs: 0]
  o- portals ............... [Portals: 1]
    o- 0.0.0.0:3260 ........ [OK]

Below tpg1, three objects have been defined:
- acls (access control lists: restrict access to resources),
- luns (logical unit numbers: define exported resources),
- portals (define ways to reach the exported resources; they consist of pairs of IP addresses and ports).

If you use a version before RHEL 7.1 (this step is now automatically done by the iscsi/ create command), you need to create a portal (a pair of IP address and port through which the target can be contacted by initiators):
/iscsi/iqn.20...ample:t1/tpg1> portals/ create
Using default IP port 3260
Binding to INADDR_ANY (0.0.0.0)
Created network portal 0.0.0.0:3260.

Whatever the version, create a LUN depending on the kind of backstore you previously chose.
Fileio backstore:
/iscsi/iqn.20...ample:t1/tpg1> luns/ create /backstores/fileio/shareddata
Created LUN 0.
Block backstore:
/iscsi/iqn.20...ample:t1/tpg1> luns/ create /backstores/block/block1
Created LUN 0.

Create an ACL with the previously created IQN (here iqn.2014-08.com.example) and an identifier you choose (here client), together forming the future initiator name:
/iscsi/iqn.20...ample:t1/tpg1> acls/ create iqn.2014-08.com.example:client
Created Node ACL for iqn.2014-08.com.example:client
Created mapped LUN 0.

Optionally, set a userid and a password:
/iscsi/iqn.20...ample:t1/tpg1> cd acls/iqn.2014-08.com.example:client/
/iscsi/iqn.20...xample:client> set auth userid=usr
Parameter userid is now 'usr'.
/iscsi/iqn.20...xample:client> set auth password=pwd
Parameter password is now 'pwd'.
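On the initiator side, the matching CHAP credentials go into /etc/iscsi/iscsid.conf. A minimal fragment matching the usr/pwd example above (restart iscsid after editing):

```
# /etc/iscsi/iscsid.conf (initiator side) -- CHAP credentials matching
# the userid/password set on the target ACL
node.session.auth.authmethod = CHAP
node.session.auth.username = usr
node.session.auth.password = pwd
```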
Now, to check the configuration, type:
/iscsi/iqn.20...xample:client> cd ../..
/iscsi/iqn.20...ample:t1/tpg1> ls
o- tpg1 ........................... [no-gen-acls, no-auth]
  o- acls ......................... [ACLs: 1]
  | o- iqn.2014-08.com.example:client ... [Mapped LUNs: 1]
  |   o- mapped_lun0 .............. [lun0 fileio/shareddata (rw)]
  o- luns ......................... [LUNs: 1]
  | o- lun0 ... [fileio/shareddata (/opt/shareddata.img)]
  o- portals ...................... [Portals: 1]
    o- 0.0.0.0:3260 ............... [OK]

Finally, you can quit the targetcli command:
/iscsi/iqn.20...ample:t1/tpg1> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
Note: The configuration is automatically saved to the /etc/target/saveconfig.json file.

You set up the iSCSI target configuration and did the first steps of the iSCSI initiator configuration up to the discovery. You now have to execute the 'login' step.
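Before moving on to the initiator, the saved configuration can be sanity-checked with plain text tools. The JSON below is an abridged, illustrative sample of the layout used by /etc/target/saveconfig.json, with values from this tutorial's example (on a real system, point the grep commands at the actual file, or use jq if available):

```shell
# Write an abridged, illustrative sample of the saveconfig.json layout.
cat > /tmp/saveconfig-sample.json <<'EOF'
{
  "storage_objects": [
    { "name": "shareddata", "plugin": "fileio", "dev": "/opt/shareddata.img" }
  ],
  "targets": [
    { "wwn": "iqn.2014-08.com.example:t1", "fabric": "iscsi" }
  ]
}
EOF

# List the backstore names and target IQNs with basic text tools.
grep -o '"name": "[^"]*"' /tmp/saveconfig-sample.json
grep -o 'iqn[^"]*' /tmp/saveconfig-sample.json
```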
What command logs you in to the remote resource from the initiator?

# iscsiadm --mode session --targetname iqn.2014-08.com.example:t1 --portal 192.168.1.81
# iscsiadm --mode node --targetname iqn.2014-08.com.example:t1 --portal 192.168.1.81 --login
# iscsiadm --mode session --targetname iqn.2014-08.com.example:t1 --portal 192.168.1.81 --login
# iscsiadm --mode node --targetname iqn.2014-08.com.example:t1 --portal 192.168.1.81

(The second command is the correct one: logins are performed in node mode with the --login option.)

You may need to set an ACL configuration on the target or limit the target to a given IP address or IQN. If you are required to do this, you have two options: 1) protect it via an ACL using the IQN of the client ("cat /etc/iscsi/initiatorname.iscsi" on the client, then add it on the server in targetcli, which is quite easy really: "acls/ create iqn.1994-05.com.redhat:a1"); 2) protect it via the firewall; using a standard --add-port will not protect it unless you have a specific source address in your zone. If this is the case, you will need to use rich rules. The easiest is to use firewall-config, as remembering Read more ».
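Putting the discovery and login steps together, here is a dry-run sketch of the initiator-side sequence: it only prints each command so the order can be reviewed before running them for real as root (the IP and IQN are this page's example values):

```shell
#!/bin/sh
# Dry-run sketch of the initiator-side sequence.
# The IP and IQN are the example values used on this page.
TARGET_IP=192.168.1.81
IQN=iqn.2014-08.com.example:t1

run() { echo "# $*"; }   # change 'echo "# $*"' to "$@" to really execute

run iscsiadm --mode discovery --type sendtargets --portal "$TARGET_IP"
run iscsiadm --mode node --targetname "$IQN" --portal "$TARGET_IP" --login
run iscsiadm --mode session   # check that the session is established
```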
Dear CertDepot, thanks for your tutorials, they are so informative. But I've been struggling with the iSCSI initiator on the client side for a while now. I've followed your tutorial from start to finish, but anytime I come to the login part on the client side, I keep getting this error: "iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)." "iscsiadm: Could not log into all portals" Do you have any idea what I might be doing wrong so that I can correct it?
It is driving me crazy! Hello everyone, if you are experiencing an issue during your RHEL training such as: "iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)." "iscsiadm: Could not log into all portals" It appears to be a bug even in RHEL 7 as far as I understand; I am not sure whether an upgraded version like 7.2 or software updates fix it, and only if you are subscribed to Red Hat. Now here is the solution: if you are running tests on VMs and your domain is example.com and you have named your machines after the domain, e.g. Read more ».
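A quick way to rule out the most common cause of error 24 is to compare the client's InitiatorName with the ACL string character by character. The sketch below is self-contained (it writes a sample file under /tmp so it can run anywhere); on a real client, read /etc/iscsi/initiatorname.iscsi instead:

```shell
#!/bin/sh
# Authorization failure (error 24) is often a mismatch between the
# client's InitiatorName and the ACL created on the target.
# /tmp is used so the example is self-contained; on a real client
# read /etc/iscsi/initiatorname.iscsi instead.
cat > /tmp/initiatorname.iscsi <<'EOF'
InitiatorName=iqn.2014-08.com.example:client
EOF

client_iqn=$(sed -n 's/^InitiatorName=//p' /tmp/initiatorname.iscsi)
acl_iqn="iqn.2014-08.com.example:client"   # as created in targetcli

# The comparison is case-sensitive, exactly like the target's ACL check.
if [ "$client_iqn" = "$acl_iqn" ]; then
    echo "match"
else
    echo "MISMATCH: '$client_iqn' vs '$acl_iqn'"
fi
```

If the strings differ, fix /etc/iscsi/initiatorname.iscsi (or the ACL on the target) and restart iscsid before retrying the login.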
Hello all, the article here is really informative and helpful for beginners. Thanks for writing the complete step-by-step guide. I am new to the environment, and have tried creating the iSCSI target on CentOS 7 based on the inputs given. I am connecting to the iSCSI target from Ubuntu on the client side. I am able to connect to the target, but the connected drive is in read-only mode. I am not able to trace the error I made. Can you please guide me on where I may be going wrong while making the connection / or Read more ».

I'm having the following issue with an iSCSI initiator in RHEL 7.0: when I reboot without a proper umount and logout, the system just hangs. I guess it is some kind of bug, which I have solved by editing /etc/iscsi/iscsid.conf and setting the logout timer to 1 second (the default is 15).
The value is "node.conn[0].timeo.logout_timeout = 1". Once I had edited it and restarted the iscsi and iscsid services, the machine simply reboots and/or shuts down as expected. Can someone confirm this behaviour? Relevant packages: initscripts-9.49.17-1.el7_0.1.x86_64, iscsi-initiator-utils-6.2.0.873-21.el7.x86_64, iscsi-initiator-utils-iscsiuio-6.2.0.873-21.el7.x86_64. I meant that rebooting the initiator (the client) without unmounting and logging out of iSCSI will cause the machine to stall (it never shuts down). And in a hurry, this could happen.
I've mentioned this as a precaution, as the script that Red Hat will use probably will not check for iSCSI mounts; it will just reboot (this is just an assumption). And if your machine never comes up, and it doesn't even properly shut down, then you fail? No, I didn't try another length. The default one is 15 s. I was thinking about 0 Read more ».
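The fix described in this thread, as a hedged /etc/iscsi/iscsid.conf fragment (shortening the logout timeout is a workaround reported by a commenter, not an official fix):

```
# /etc/iscsi/iscsid.conf -- shorten the iSCSI logout timeout so a
# reboot with an active session does not hang for the default 15 s
node.conn[0].timeo.logout_timeout = 1
```

Restart the iscsi and iscsid services afterwards for the change to take effect on new sessions.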
Okay, I have configured both; it seems not that hard. But, there's a but: after issuing the commands below
# iscsiadm -m discovery -t st -p MYIP
# vi /etc/iscsi/initiatorname.iscsi
# systemctl restart iscsi
# systemctl restart iscsid
# iscsiadm -m node -T iqn.2018.com:server -p MYIP -l
# lsblk / fdisk -l
I was able to see the disks, the two disks I created. Then:
fdisk /dev/sdc
fdisk /dev/sdd
pvcreate /dev/sdc1
pvcreate /dev/sdd1
vgcreate vgnew1 /dev/sdc1
vgcreate vgnew2 /dev/sdd1
lvcreate -l 100%FREE -n lvdisk1 /dev/vgnew1
lvcreate -l 100%FREE -n lvdisk2 /dev/vgnew2
mkfs -t xfs /dev/vgnew1/lvdisk1
mkfs -t xfs /dev/vgnew2/lvdisk2
mkdir /disk1 Read more ».

Okay, the server is up now. I don't like what happened. First scenario: after I reboot the server, it says Rebooting in the console.
It was hanging, so I went to vCenter and forced a shutdown of the server. This is not good; do we have access to the console of the VM server? After I forced the shutdown, I was stuck in maintenance mode. So I tried to stop sharing the iSCSI disk and to stop the service on the target. It did not work. Finally I rebooted it again, and I'm in maintenance mode. I put a # on the line in /etc/fstab. My Read more ».

It is interesting that you say that in the exam it is wise to unmount the remote resource to avoid surprises. Can I ask to what purpose is this?
Surely the whole point of an initiator is to mount a partition on boot and then, when rebooting, to do this seamlessly without manual intervention? I ask this, though, because I encountered a very strange thing in my exam related to the initiator: after doing all the steps correctly and then doing a 'mount -a', where all seemed to be good, a reboot basically broke my client, where it would Read more ».

I was able to configure the iSCSI target and initiator easily, with CHAP. I was able to mount, and when I reboot the iSCSI initiator server it is not hanging. If I reboot the target server, then upon boot-up it erases my configuration. Then I cannot do a restoreconfig, because it says /dev/iscsi/disk1 is in use.
Okay, I stopped all services, from initiator to target, and yet I can't do a restoreconfig. Okay, I tried lvremove and dmsetup remove on the said disk, and it says the same. It is Read more ».
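A frequently reported cause of this "in use" error is that the target host's own LVM scans and claims the block device backing the LUN at boot. A hedged workaround is to reject that device in lvm.conf (the device path below is this tutorial's example; adjust it to your setup):

```
# /etc/lvm/lvm.conf -- keep host LVM away from the device that backs
# the iSCSI LUN (the path is illustrative)
devices {
    global_filter = [ "r|^/dev/vg/lviscsi$|" ]
}
```

After editing, you may also need to rebuild the initramfs so early-boot LVM applies the same filter, then retry the restore once the device is free.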
Okay, my question is: why is my config gone after every reboot? I'm asking why; I tried recreating them, I don't mind. I became a master of the iSCSI target and initiator; the commands are at my fingertips now, but I am wondering why. When the server is up, I can't run restoreconfig because it keeps saying disk1 is being used; I ran lsof, etc. Nope, it's not being used. I spent 4 hours on Google, searching for an answer. Somebody said to fix the lvm.conf global_filter; I did, and I was able to run restoreconfig from the saved config. My config is back. But I don't want this to Read more ».

I am trying to define authentication on a per-ACL basis. I have enabled authentication on a per-ACL basis.
Under the TPG:
/iscsi/iqn.20...:target8/tpg1> get attribute authentication
authentication=0
Under the ACL:
/iscsi/iqn.20...al.rhce:test1> get auth
AUTH CONFIG GROUP
mutual_password= (the mutual_password auth parameter)
mutual_userid= (the mutual_userid auth parameter)
password=username (the password auth parameter)
userid=password (the userid auth parameter)

As per the target configuration, it should only allow access through this ACL using the mentioned username/password. On the client, in /etc/iscsi/iscsid.conf, if I disable (#) the CHAP settings, i.e. remove the user/pass settings:
#node.session.auth.authmethod = CHAP
It should Read more ».
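For reference, per-ACL CHAP is usually enforced by combining ACL-level credentials with the TPG-level authentication attribute; the sketch below shows the targetcli commands involved, using the names from the comment above (prompts abbreviated; attribute semantics vary between targetcli versions, so verify against your release's documentation):

```
/iscsi/iqn.20...:target8/tpg1> set attribute authentication=1
/iscsi/iqn.20...:target8/tpg1> cd acls/iqn.20...al.rhce:test1
/iscsi/iqn.20...al.rhce:test1> set auth userid=usr
/iscsi/iqn.20...al.rhce:test1> set auth password=pwd
```

With authentication=0 on the TPG, CHAP is not required, which would be consistent with the behaviour described above, where commenting out the client's CHAP settings still allows the login.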