[lustre-discuss] ZFS based OSTs need advice

Zeeshan Ali Shah javaclinic at gmail.com
Tue Jun 26 08:53:30 PDT 2018


Our OSTs are based on the Supermicro SSG-J4000-LUSTRE-OST, which is essentially a
JBOD.

All 360 disks (90 disks x 4 OSTs) appear under /dev/disk on both OSS1 and OSS2.

My idea is to create ZFS pools using raidz2 (9+2, plus spare), which means around
36 zpools will be created.
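
Roughly what I have in mind for each pool is the sketch below (the pool name and
device names are placeholders, not our real ones, and I am reading "9+2" as 9
data + 2 parity disks per raidz2 vdev; I would use the persistent names under
/dev/disk/by-id so both OSS nodes see the same paths):

    # Placeholder sketch of one 11-disk raidz2 vdev plus one hot spare.
    # ashift=12 assumes 4K-sector drives; cachefile=none keeps the pool out of
    # /etc/zfs/zpool.cache so it is never auto-imported at boot and the HA
    # tooling alone decides which node imports it.
    zpool create -o ashift=12 -o cachefile=none ost00 \
        raidz2 /dev/disk/by-id/wwn-0x50000000000000{01..11} \
        spare  /dev/disk/by-id/wwn-0x5000000000000012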

Q1) Out of the 36 ZFS pools, shall I create all 36 on OSS1? In that case the
pools can only be imported on OSS1, not on OSS2, so how do I get active/active
HA here?
Q2) The second option is to create 18 zpools on OSS1 and 18 on OSS2, and then in
mkfs.lustre specify OSS1 as primary and OSS2 as secondary (running it on OSS1),
and the second time run the same command on OSS2 with OSS2 as primary and OSS1
as secondary.
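
What I was planning to run for each OST is roughly the following (fsname, NIDs,
pool/dataset names and mount point are all made up; please correct me if
--servicenode is not the right way to express the primary/secondary pairing):

    # Placeholder sketch; both OSS NIDs are listed with --servicenode so the
    # target can be mounted on either node.  Run once, on whichever node has
    # the pool imported at the time.
    mkfs.lustre --ost --backfstype=zfs \
        --fsname=testfs --index=0 \
        --mgsnode=mgs@tcp \
        --servicenode=oss1@tcp --servicenode=oss2@tcp \
        ost00/ost00

    # Normal operation: this target lives on OSS1; on failover, OSS2 would
    # import the same pool and mount the same target instead.
    zpool import -o cachefile=none ost00
    mount -t lustre ost00/ost00 /mnt/lustre/ost00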

Does that make sense? Am I missing something?

Thanks a lot


/Zee


On Tue, Jun 26, 2018 at 5:38 PM, Dzmitryj Jakavuk <dzmitryj at gmail.com>
wrote:

> Hello
>
> You can share the 4 OSTs between the pair of OSSes, importing 2 OSTs on one
> OSS and 2 OSTs on the other. At the same time, the HDDs need to be shared
> between all OSSes. So under normal conditions one OSS will import 2 OSTs and
> the second OSS will import the other 2 OSTs; in case of an HA failover, a
> single OSS can import all 4 OSTs.
>
> Kind Regards
> Dzmitryj Jakavuk
>
> > On Jun 26, 2018, at 16:02, Zeeshan Ali Shah <javaclinic at gmail.com>
> wrote:
> >
> > We have 2 OSSes with 4 shared OSTs. Each OST has 90 disks, so 360 disks in
> > total.
> >
> > I am in the phase of installing the 2 OSSes as active/active, but since a
> > ZFS pool can only be imported on a single OSS host at a time, how do I
> > achieve active/active HA in this case?
> > From what I have read, for active/active both HA hosts should have access
> > to the same sets of disks/volumes.
> >
> > Any advice?
> >
> >
> > /Zeeshan
> >
> >
> >
> > _______________________________________________
> > lustre-discuss mailing list
> > lustre-discuss at lists.lustre.org
> > http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
>