Hi,<br><br>I don't think you should use the rdac path checker in your multipath.conf. I would suggest using the tur path checker instead:<br><br>path_checker tur<br><br>Best regards,<br><br>Wojciech<br><br><div class="gmail_quote">
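For reference, a minimal defaults stanza with the tur checker might look like the sketch below. This is only an illustration based on the config quoted below, not a tested configuration; values other than path_checker are carried over unchanged:

```conf
defaults {
        # tur issues a generic SCSI TEST UNIT READY command,
        # instead of the vendor-specific rdac mode-sense check
        path_checker            tur
        polling_interval        10
        failback                immediate
        user_friendly_names     yes
}
```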
On 13 August 2010 16:51, David Noriega <span dir="ltr"><<a href="mailto:tsk133@my.utsa.edu">tsk133@my.utsa.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
We have three Sun StorageTek 2150, one connected to the metadata<br>
server and two crossconnected to the two data storage nodes. They are<br>
connected via fiber using the qla2xxx driver that comes with CentOS<br>
5.5. The multipath daemon has the following config:<br>
<br>
defaults {<br>
udev_dir /dev<br>
polling_interval 10<br>
selector "round-robin 0"<br>
path_grouping_policy multibus<br>
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"<br>
prio_callout "/sbin/mpath_prio_rdac /dev/%n"<br>
path_checker rdac<br>
rr_min_io 100<br>
max_fds 8192<br>
rr_weight priorities<br>
failback immediate<br>
no_path_retry fail<br>
user_friendly_names yes<br>
}<br>
<br>
Commented out in the multipath.conf file:<br>
<br>
blacklist {<br>
devnode "*"<br>
<div><div></div><div class="h5">}<br>
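If blacklisting is wanted later, a narrower rule with an exception for the array's volumes is an option. A sketch, where the WWID is a placeholder and would need to be replaced with the real value from `multipath -ll`:

```conf
blacklist {
        # skip local/non-multipath device nodes only
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
blacklist_exceptions {
        # placeholder WWID -- substitute the array volume's actual WWID
        wwid "3600a0b8000fbd5ee0000000000000000"
}
```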
<br>
<br>
On Fri, Aug 13, 2010 at 4:31 AM, Wojciech Turek <<a href="mailto:wjt27@cam.ac.uk">wjt27@cam.ac.uk</a>> wrote:<br>
> Hi David,<br>
><br>
> I have seen similar errors given out by some storage arrays. They were<br>
> caused by arrays exporting volumes via more than a single path without a<br>
> multipath driver installed or configured properly. Sometimes the array<br>
> controllers require a special driver to be installed on the Linux host (for<br>
> example the RDAC mpp driver) to properly present and handle configured volumes<br>
> in the OS. What sort of disk RAID array are you using?<br>
><br>
> Best regards,<br>
><br>
> Wojciech<br>
><br>
> On 12 August 2010 17:58, David Noriega <<a href="mailto:tsk133@my.utsa.edu">tsk133@my.utsa.edu</a>> wrote:<br>
>><br>
>> We just set up a Lustre system, and all looks good, but there is this<br>
>> nagging error that keeps floating about. When I reboot any of the nodes, be<br>
>> it an OSS or MDS, I will get this:<br>
>><br>
>> [root@meta1 ~]# dmesg | grep sdc<br>
>> sdc : very big device. try to use READ CAPACITY(16).<br>
>> SCSI device sdc: 4878622720 512-byte hdwr sectors (2497855 MB)<br>
>> sdc: Write Protect is off<br>
>> sdc: Mode Sense: 77 00 10 08<br>
>> SCSI device sdc: drive cache: write back w/ FUA<br>
>> sdc : very big device. try to use READ CAPACITY(16).<br>
>> SCSI device sdc: 4878622720 512-byte hdwr sectors (2497855 MB)<br>
>> sdc: Write Protect is off<br>
>> sdc: Mode Sense: 77 00 10 08<br>
>> SCSI device sdc: drive cache: write back w/ FUA<br>
>> sdc:end_request: I/O error, dev sdc, sector 0<br>
>> Buffer I/O error on device sdc, logical block 0<br>
>> end_request: I/O error, dev sdc, sector 0<br>
>><br>
>> This doesn't seem to affect anything. fdisk -l doesn't even report the<br>
>> device. The same (though of course with different block devices, sdd, sde, on<br>
>> the OSSs) happens on all the nodes.<br>
>><br>
>> If I run pvdisplay or lvdisplay, I'll get this:<br>
>> /dev/sdc: read failed after 0 of 4096 at 0: Input/output error<br>
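One common remedy for LVM read errors on passive paths is to stop LVM from scanning the raw sdX devices and point it at the multipath devices only, via the filter in /etc/lvm/lvm.conf. The sketch below assumes user_friendly_names is in use (so multipath devices appear as /dev/mapper/mpath*); adjust the patterns to the actual device names:

```conf
# /etc/lvm/lvm.conf -- accept multipath devices, reject raw sdX paths
filter = [ "a|^/dev/mapper/mpath.*|", "r|^/dev/sd.*|" ]
```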
>><br>
>> Any ideas?<br>
>> David<br>
>> --<br>
>> Personally, I liked the university. They gave us money and facilities,<br>
>> we didn't have to produce anything! You've never been out of college!<br>
>> You don't know what it's like out there! I've worked in the private<br>
>> sector. They expect results. -Ray Ghostbusters<br>
>> _______________________________________________<br>
>> Lustre-discuss mailing list<br>
>> <a href="mailto:Lustre-discuss@lists.lustre.org">Lustre-discuss@lists.lustre.org</a><br>
>> <a href="http://lists.lustre.org/mailman/listinfo/lustre-discuss" target="_blank">http://lists.lustre.org/mailman/listinfo/lustre-discuss</a><br>
><br>
><br>
><br>
> --<br>
> Wojciech Turek<br>
><br>
> Senior System Architect<br>
><br>
> High Performance Computing Service<br>
> University of Cambridge<br>
> Email: <a href="mailto:wjt27@cam.ac.uk">wjt27@cam.ac.uk</a><br>
> Tel: (+)44 1223 763517<br>
><br>
<br>
<br>
<br>
</div></div>--<br>
<div><div></div><div class="h5">Personally, I liked the university. They gave us money and facilities,<br>
we didn't have to produce anything! You've never been out of college!<br>
You don't know what it's like out there! I've worked in the private<br>
sector. They expect results. -Ray Ghostbusters<br>
_______________________________________________<br>
Lustre-discuss mailing list<br>
<a href="mailto:Lustre-discuss@lists.lustre.org">Lustre-discuss@lists.lustre.org</a><br>
<a href="http://lists.lustre.org/mailman/listinfo/lustre-discuss" target="_blank">http://lists.lustre.org/mailman/listinfo/lustre-discuss</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Wojciech Turek<br><br>Senior System Architect<br><br>High Performance Computing Service<br>University of Cambridge<br>Email: <a href="mailto:wjt27@cam.ac.uk" target="_blank">wjt27@cam.ac.uk</a><br>
Tel: (+)44 1223 763517 <br>