AutoFS and NFS Home
-
@johnhooks said:
@dafyre said:
@johnhooks said:
@dafyre said:
Why are there two NFS servers to start with? (Just curious)
They're only 20-24 drives each, about 50TB per server. All of the engineers' home folders are on them, so one isn't enough.
At some point down the road we are going to implement clustered storage, but we just can't right now because of the time constraints on this project.
I wonder if something like UnionFS might be helpful here?
It's part of how I get my Plex server to see both files on my local machine as well as files on my ACD drive as if they were all in one folder.
That's a possibility. I'll have to look into it. Sounds similar to GFS?
I'm not sure. It's not really a file system... It's more akin to DFS Name Spaces, I think...
-
@Dashrender said:
As a complete Linux noob here... if you have users' home folders spread over two systems, how does wherever they log in know which of those servers has their files?
You set the remote folders in the configuration. The & in the location is the wildcard character for the username. So I was hoping it would look in one, if it doesn't find it, it would go to the next.
You can also manually add each user's home folder and location, but that's a lot of work.
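For reference, a wildcard automount map along those lines might look like this (the map file names are the usual autofs defaults; the server name and export path are assumptions, not from this thread):

```
# /etc/auto.master
/home  /etc/auto.home

# /etc/auto.home
# The * key matches whatever directory name is requested under /home,
# and & substitutes that same name (the username) into the location.
*  -fstype=nfs,rw  nfsserver1:/export/home/&
```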
If it won't work, we can just do a /home1 and a /home2.
-
@dafyre said:
@johnhooks said:
@dafyre said:
@johnhooks said:
@dafyre said:
Why are there two NFS servers to start with? (Just curious)
They're only 20-24 drives each, about 50TB per server. All of the engineers' home folders are on them, so one isn't enough.
At some point down the road we are going to implement clustered storage, but we just can't right now because of the time constraints on this project.
I wonder if something like UnionFS might be helpful here?
It's part of how I get my Plex server to see both files on my local machine as well as files on my ACD drive as if they were all in one folder.
That's a possibility. I'll have to look into it. Sounds similar to GFS?
I'm not sure. It's not really a file system... It's more akin to DFS Name Spaces, I think...
Ah ok. I'll look into it. Thanks!
-
@johnhooks said:
@dafyre said:
@johnhooks said:
@dafyre said:
@johnhooks said:
@dafyre said:
Why are there two NFS servers to start with? (Just curious)
They're only 20-24 drives each, about 50TB per server. All of the engineers' home folders are on them, so one isn't enough.
At some point down the road we are going to implement clustered storage, but we just can't right now because of the time constraints on this project.
I wonder if something like UnionFS might be helpful here?
It's part of how I get my Plex server to see both files on my local machine as well as files on my ACD drive as if they were all in one folder.
That's a possibility. I'll have to look into it. Sounds similar to GFS?
I'm not sure. It's not really a file system... It's more akin to DFS Name Spaces, I think...
Ah ok. I'll look into it. Thanks!
Yeah, what dafyre was talking about looked like DFS to me.
-
@johnhooks said:
@Dashrender said:
As a complete Linux noob here... if you have users' home folders spread over two systems, how does wherever they log in know which of those servers has their files?
You set the remote folders in the configuration. The & in the location is the wildcard character for the username. So I was hoping it would look in one, if it doesn't find it, it would go to the next.
You can also manually add each user's home folder and location, but that's a lot of work.
If it won't work, we can just do a /home1 and a /home2.
In Windows I can assign a user a homedrive of \\servername\sharename\%username%
But I don't think there is a way to variablize the sharename itself
-
@Dashrender said:
@johnhooks said:
@Dashrender said:
As a complete Linux noob here... if you have users' home folders spread over two systems, how does wherever they log in know which of those servers has their files?
You set the remote folders in the configuration. The & in the location is the wildcard character for the username. So I was hoping it would look in one, if it doesn't find it, it would go to the next.
You can also manually add each user's home folder and location, but that's a lot of work.
If it won't work, we can just do a /home1 and a /home2.
In Windows I can assign a user a homedrive of \\servername\sharename\%username%
But I don't think there is a way to variablize the sharename itself
Right. He could fake it with UnionFS or (if stuck in Windows) DFS Name Spaces
-
@Dashrender said:
@johnhooks said:
@Dashrender said:
As a complete Linux noob here... if you have users' home folders spread over two systems, how does wherever they log in know which of those servers has their files?
You set the remote folders in the configuration. The & in the location is the wildcard character for the username. So I was hoping it would look in one, if it doesn't find it, it would go to the next.
You can also manually add each user's home folder and location, but that's a lot of work.
If it won't work, we can just do a /home1 and a /home2.
In Windows I can assign a user a homedrive of \\servername\sharename\%username%
But I don't think there is a way to variablize the sharename itself
Ya, I could do that with a 2nd home directory and it would be fine. I can only have one * key though, so I would have to set up a new auto.home2 map and have the mount point as /home2 with the new * key under it.
It might not even be worth messing with. Later on at some point we are going to do some kind of clustered storage (Gluster or Ceph) and it won't matter anyway; we could have as much as we want in one directory.
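The two-mount-point setup described above might be sketched like this (server names and export paths are hypothetical; each map has its own * wildcard key):

```
# /etc/auto.master
/home   /etc/auto.home
/home2  /etc/auto.home2

# /etc/auto.home -- users living on the first server
*  -fstype=nfs,rw  nfsserver1:/export/home/&

# /etc/auto.home2 -- users living on the second server
*  -fstype=nfs,rw  nfsserver2:/export/home/&
```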
-
@dafyre said:
@Dashrender said:
@johnhooks said:
@Dashrender said:
As a complete Linux noob here... if you have users' home folders spread over two systems, how does wherever they log in know which of those servers has their files?
You set the remote folders in the configuration. The & in the location is the wildcard character for the username. So I was hoping it would look in one, if it doesn't find it, it would go to the next.
You can also manually add each user's home folder and location, but that's a lot of work.
If it won't work, we can just do a /home1 and a /home2.
In Windows I can assign a user a homedrive of \\servername\sharename\%username%
But I don't think there is a way to variablize the sharename itself
Right. He could fake it with UnionFS or (if stuck in Windows) DFS Name Spaces
Could you though? I haven't actually used DFS before, but I thought DFS worked as follows: you create a DFS root, \\domainname, then you create a share within that root space, \\domainname\usershares, then you mount other direct shares to that DFS share, which creates a subfolder in the DFS share, i.e. real share \\server1\home1 = DFS \\domainname\share\home1.
So this would mean you'd have
\\domainname\share\home1
\\domainname\share\home2
You'd still have to assign the specific path (\\domainname\share\home1 or home2) in the user information.
I could be completely off base on this, if so, please correct me.
-
@johnhooks said:
@Dashrender said:
@johnhooks said:
@Dashrender said:
As a complete Linux noob here... if you have users' home folders spread over two systems, how does wherever they log in know which of those servers has their files?
You set the remote folders in the configuration. The & in the location is the wildcard character for the username. So I was hoping it would look in one, if it doesn't find it, it would go to the next.
You can also manually add each user's home folder and location, but that's a lot of work.
If it won't work, we can just do a /home1 and a /home2.
In Windows I can assign a user a homedrive of \\servername\sharename\%username%
But I don't think there is a way to variablize the sharename itself
Ya, I could do that with a 2nd home directory and it would be fine. I can only have one * key though, so I would have to set up a new auto.home2 map and have the mount point as /home2 with the * key under it.
It might not even be worth messing with. Later on at some point we are going to do some kind of clustered storage (gluster or ceph) and it won't matter anyway, we could have as much as we want in one directory.
UnionFS would work something like this...
On nfsserver1 in the /data folder...
mkdir otherserver
mkdir allusers
mount nfsserver2:/data/users /data/otherserver
mount -t unionfs -o dirs=/data/users:/data/otherserver /data/allusers
Modify the exportfs to use /data/allusers
Then point your software above to nfsserver1:/data/allusers/&
That's the short-short, highly volatile, may-melt-your-face-off (or make your servers dance with the devil in the pale moonlight), heavily untested version... but an idea, nonetheless.
-
@dafyre said:
mount -t unionfs -o dirs=/data/users:/data/otherserver /data/allusers
I got it.. that's kinda cool, basically fakes a merger of those two folders into a new folder.
-
@Dashrender said:
@dafyre said:
mount -t unionfs -o dirs=/data/users:/data/otherserver /data/allusers
I got it.. that's kinda cool, basically fakes a merger of those two folders into a new folder.
Yepp... so if somebody writes something into /data/allusers/newuser it gets created on the nfsserver1 ...
But if somebody writes something into an existing folder, then it saves it where that folder really lives.
It's ugly, but it does work!
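The write rule described above (new entries land on the first branch listed in dirs=, while writes into an existing folder land on whichever branch already holds it) can be sketched as a toy simulation. This is just an illustration of the described behavior, not actual unionfs code; all names here are hypothetical:

```python
# Toy model of the union write rule: each "branch" is a dict mapping
# directory name -> set of file names, listed in the same order as the
# dirs= option (first branch wins for brand-new entries).
def union_write_target(branches, dirname):
    for branch in branches:
        if dirname in branch:
            return branch  # existing folder: stays where it really lives
    branches[0][dirname] = set()  # new folder: created on the first branch
    return branches[0]

nfsserver1 = {"alice": {"report.txt"}}
nfsserver2 = {"bob": set()}
branches = [nfsserver1, nfsserver2]  # dirs=/data/users:/data/otherserver

union_write_target(branches, "bob")["bob"].add("notes.txt")  # lands on nfsserver2
union_write_target(branches, "carol")  # new user's folder appears on nfsserver1
```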
-
@dafyre said:
@Dashrender said:
@dafyre said:
mount -t unionfs -o dirs=/data/users:/data/otherserver /data/allusers
I got it.. that's kinda cool, basically fakes a merger of those two folders into a new folder.
Yepp... so if somebody writes something into /data/allusers/newuser it gets created on the nfsserver1 ...
But if somebody writes something into an existing folder, then it saves it where that folder really lives.
It's ugly, but it does work!
So if you want/need something to go to server2, you have to create the folder first? OK, pain, but maybe worth it.
-
@Dashrender said:
@dafyre said:
@Dashrender said:
@dafyre said:
mount -t unionfs -o dirs=/data/users:/data/otherserver /data/allusers
I got it.. that's kinda cool, basically fakes a merger of those two folders into a new folder.
Yepp... so if somebody writes something into /data/allusers/newuser it gets created on the nfsserver1 ...
But if somebody writes something into an existing folder, then it saves it where that folder really lives.
It's ugly, but it does work!
So if you want/need something to go to server2, you have to create the folder first? OK, pain, but maybe worth it.
If you want nfsserver2 to be primary, you would change the mount around...
mount -t unionfs -o dirs=/data/otherserver:/data/users /data/allusers
(note: this would be run from the command line of nfsserver1)
-
@dafyre said:
@Dashrender said:
@dafyre said:
@Dashrender said:
@dafyre said:
mount -t unionfs -o dirs=/data/users:/data/otherserver /data/allusers
I got it.. that's kinda cool, basically fakes a merger of those two folders into a new folder.
Yepp... so if somebody writes something into /data/allusers/newuser it gets created on the nfsserver1 ...
But if somebody writes something into an existing folder, then it saves it where that folder really lives.
It's ugly, but it does work!
So if you want/need something to go to server2, you have to create the folder first? OK, pain, but maybe worth it.
If you want nfsserver2 to be primary, you would change the mount around...
mount -t unionfs -o dirs=/data/otherserver:/data/users /data/allusers
(note: this would be run from the command line of nfsserver1)
Not what I was going for... I was going to leave the primary where you had it, but I want to occasionally add a new thing to server2, not server1, so I would have to go to the actual share and create the folder manually.
-
@Dashrender said:
@dafyre said:
@Dashrender said:
@dafyre said:
@Dashrender said:
@dafyre said:
mount -t unionfs -o dirs=/data/users:/data/otherserver /data/allusers
I got it.. that's kinda cool, basically fakes a merger of those two folders into a new folder.
Yepp... so if somebody writes something into /data/allusers/newuser it gets created on the nfsserver1 ...
But if somebody writes something into an existing folder, then it saves it where that folder really lives.
It's ugly, but it does work!
So if you want/need something to go to server2, you have to create the folder first? OK, pain, but maybe worth it.
If you want nfsserver2 to be primary, you would change the mount around...
mount -t unionfs -o dirs=/data/otherserver:/data/users /data/allusers
(note: this would be run from the command line of nfsserver1)
Not what I was going for... I was going to leave the primary where you had it, but I want to occasionally add a new thing to server2, not server1, so I would have to go to the actual share and create the folder manually.
If you do it the second way I listed, any new folders created under /data/allusers would go to server2 by default. But yeah, you could just as easily create the folders on server2 and set the permissions appropriately.
-
@johnhooks said:
@dafyre said:
Why are there two NFS servers to start with? (Just curious)
They're only 20-24 drives each, about 50TB per server. All of the engineers' home folders are on them, so one isn't enough.
At some point down the road we are going to implement clustered storage, but we just can't right now because of the time constraints on this project.
Gluster could be done in an hour. I have how-tos posted for both NFS Home Automounting and Gluster.
-
Gluster would solve this as fast as any of those solutions. Faster than this conversation, actually.
Another option is to use one as a NAS head and the other as a SAN, presenting it all as one pool even though it is two machines. But that would give up some of the performance and make it more fragile.
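Scott's Gluster suggestion would boil down to something like the following (the hostnames, volume name, and brick paths are assumptions, not from this thread; a distributed volume pools the capacity of both boxes into one namespace):

```
# On nfsserver1, run once:
gluster peer probe nfsserver2
gluster volume create homes nfsserver1:/data/brick1 nfsserver2:/data/brick1
gluster volume start homes

# Clients then mount one namespace instead of two:
mount -t glusterfs nfsserver1:/homes /home
```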
-
@scottalanmiller said:
@johnhooks said:
@dafyre said:
Why are there two NFS servers to start with? (Just curious)
They're only 20-24 drives each, about 50TB per server. All of the engineers' home folders are on them, so one isn't enough.
At some point down the road we are going to implement clustered storage, but we just can't right now because of the time constraints on this project.
Gluster could be done in an hour. I have how-tos posted for both NFS Home Automounting and Gluster.
Ha yes anywhere else it would take no time at all. We have so much red tape to jump through it's ridiculous.
-
@johnhooks said:
@scottalanmiller said:
@johnhooks said:
@dafyre said:
Why are there two NFS servers to start with? (Just curious)
They're only 20-24 drives each, about 50TB per server. All of the engineers' home folders are on them, so one isn't enough.
At some point down the road we are going to implement clustered storage, but we just can't right now because of the time constraints on this project.
Gluster could be done in an hour. I have how-tos posted for both NFS Home Automounting and Gluster.
Ha yes anywhere else it would take no time at all. We have so much red tape to jump through it's ridiculous.
Start setting up and testing a Gluster Cluster (see what I did there?)... and maybe by the time you get it set up and tested, you'll be done playing jump rope with the red tape.
-
@dafyre said:
@johnhooks said:
@scottalanmiller said:
@johnhooks said:
@dafyre said:
Why are there two NFS servers to start with? (Just curious)
They're only 20-24 drives each, about 50TB per server. All of the engineers' home folders are on them, so one isn't enough.
At some point down the road we are going to implement clustered storage, but we just can't right now because of the time constraints on this project.
Gluster could be done in an hour. I have how-tos posted for both NFS Home Automounting and Gluster.
Ha yes anywhere else it would take no time at all. We have so much red tape to jump through it's ridiculous.
Start setting up and testing a Gluster Cluster (see what I did there?)... and maybe by the time you get it set up and tested, you'll be done playing jump rope with the red tape.
The other issue is that the NFS servers we have right now are appliances (that was done before I got here; I've only been here less than a month). We can install certain things, but too much and we might lose "support."
We have to have these inspectors come in and approve stuff if any changes are made to this network. It's ridiculous.