Videos
Check out our videos to discover features and techniques that will enhance the security and performance of your Edge, Server, and Storage networks.
Nfina’s Hybrid Edge-Cloud™ Solution
TRANSCRIPT
Did you know that adopting Nfina’s Hybrid Cloud infrastructure allows you to take advantage of the scale and flexibility of the public cloud while maintaining the security, control, and reliability of on-premises storage? It also allows you to pursue a tiered storage strategy for maximum cost efficiency that the public hyperscale cloud environment can’t match. According to Gartner, midsize enterprises are hampered in their private cloud modernization initiatives unless they adopt public and private cloud in parallel. That’s why we believe a hybrid cloud approach is the best solution for most midsize companies. Nfina’s Hybrid Cloud customers experience around 50% savings versus public cloud solutions alone.

Our hybrid cloud solutions include: infrastructure as a service (IaaS) on premises and in the cloud; geo-redundancy, including two sites at no additional charge, ensuring consistent productivity and safeguarding of your data with timely backup to all storage locations; duplication of your IT ecosystem in our public cloud for complete redundancy; lower latency than the hyperscale cloud; and affordable disaster recovery as a service (DRaaS) with 99.999% uptime for business continuity.

Nfina-View software provides a unified dashboard for both your private and public cloud storage, simplifying the management of your IT infrastructure. Nfina-View, with built-in backup testing, provides high confidence that your IT infrastructure is ransomware protected and available when you need it. If a problem with your data occurs, our rapid disaster recovery software allows you to fail over with one click. Data is restored in minutes, not hours or days.

Nfina provides world-class tech support, including patch management and ransomware remediation, free of charge. We eliminate the hassle of contacting multiple vendors by being your hybrid cloud single point of contact. We deal with vendors so you don’t have to. Feel safe knowing that Nfina’s hybrid cloud has your business IT systems protected. Contact us today for more information.
Do you feel pressured into
moving to the public
hyperscale cloud?
Perform a Planned Failover with Nfina-View
TRANSCRIPT
Hi. Thanks for stopping by Nfina Technologies. Today I’m going to give you a quick demo of the planned failover functionality that’s built into Nfina-View, the monitoring and management software for our storage systems here at Nfina. What you see here is a couple of nodes in this edge cloud hosting cluster. The storage cluster has these volumes on it, which are listed under Zvols.
As far as this failover test is concerned, this particular test volume is what I’m going to focus on. Five virtual machines are running on this volume, which you can see here within vCenter on the left: failover test 1 through 4 and space test. All of those virtual machines reside on this failover test datastore in Nfina-View.
If I go back to vSphere, we can confirm that failover test 1 through 4, as well as space test, reside on that particular datastore. The planned failover would be utilized, let’s say, when you proactively want to move your virtual machines from one location to another. The location we’re going to be moving to is actually offsite, in a completely different location.
And we’re going to fail over to this host here that’s in vCenter, this 233 host. It’s a disaster recovery site. So, when would you use this? Well, it might be if a hurricane is coming, or you want to move the VMs to run somewhere else temporarily while you reconfigure the system, something of that nature. Actually performing the failover is pretty simple. You’ll go into your Nfina-View interface. If the nodes are in a cluster, then the failover is going to be located at the cluster level, which is represented by this edge cloud hosting with the two-server icon shown here. You go to the failover tab and you find the volume that you want to fail over.
Now the failover is on the volume level, so any virtual machines that reside on that given volume will get moved to the offsite location. To perform the failover, just select the volume and choose failover, and that’s going to bring up our failover menu. There are a couple of choices here. It’s going to populate automatically with where it’s actually replicating to.
In this case, we’re only replicating to one location, so that’s the only choice. But you do have the ability to replicate to multiple locations, and if you had multiple locations, you would need to select which location you actually wanted to fail over to. I have a target choice, which is the iSCSI target that we’re attaching the volume to on the destination side, and the source replication VIP, which is the IP address we want to use to replicate back. Once we fail over, it will immediately set up replication back to the original system. In this case, I’m going to use the system IP, and that will allow it to immediately start replicating back. The host option here is where you choose where you want the VMs to run in the remote location, and that will be the 2.233 node over here on the left.
So, with that, I’ll go ahead and click failover. It will bring up this next menu. This is where you have some flexibility to either power on or not power on the VMs, or even not register them. It’s going to default to the state they’re currently in. So if they’re currently powered on, power on is going to be checked.
If they’re currently registered, then registration will be checked here. You also have the ability to skip VM failover altogether, which will not do any automation around VMs, or you can click Skip VM power on, which just clears the power-on checkboxes en masse here. I’m going to leave everything at the defaults and go ahead and click failover.
So, at that point, it’s starting the failover process. It’s going to shut those virtual machines down, unregister them, and begin the failover. You can see in vCenter that it has unmounted this datastore from both of these hosts on the source side, which means it now shows as inaccessible on the source side. In Nfina-View, let’s check to make sure that any changes that may have occurred during the shutdown have been replicated across. That includes any logging or any data that was written up until the point the virtual machines were actually shut down. It looks like those changes were replicated across. So, what it’s going to do now is begin to bring the volume up on the destination side.
As soon as the volume is brought up on the destination side, it will immediately begin replication back down to the source side, even before the virtual machines are brought online. You can see at this point it has registered the VMs and powered them on. Our failover test datastore is now mounted in the destination location, and the VMs are now running there.
Now if I come over to the destination node inside of Nfina-View, it failed over to this node, which is the storage node. You can go to failover and see that this DR volume now has these VMs running on it. Whenever you’ve reconfigured, or an environmental disaster such as a hurricane has passed, you can come back, click failover, and fail back to the primary location where it all started.
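For readers who want a feel for what that one click is doing under the hood, here is a minimal Python sketch of the planned-failover sequence. It assumes EdgeStore’s ZFS underpinnings (described in the EdgeStore demo below) and uses made-up volume, snapshot, and host names; the vCenter steps are stubbed out, since Nfina-View drives those through its own integration.

```python
import subprocess

def sh(cmd):
    """Run a shell command and fail loudly (sketch only)."""
    subprocess.run(cmd, shell=True, check=True)

def planned_failover(zvol, prev_snap, dr_host):
    """Rough order of operations behind a one-click planned failover.
    zvol, prev_snap, and dr_host are illustrative placeholder names."""
    # 1. Power off and unregister the VMs on the volume's datastore
    #    (Nfina-View drives this step through vCenter; stubbed here).
    print("shutting down and unregistering VMs via vCenter ...")
    # 2. Snapshot and ship the final changes, everything written up
    #    to the shutdown, to the DR site as an incremental send.
    sh(f"zfs snapshot {zvol}@failover")
    sh(f"zfs send -i {zvol}@{prev_snap} {zvol}@failover"
       f" | ssh {dr_host} zfs recv -F {zvol}")
    # 3. Bring the volume up on the destination side, then immediately
    #    configure replication back to the source over the chosen VIP.
    print("promoting volume at DR site; reversing replication ...")
    # 4. Re-register the VMs on the DR host and power them on, honoring
    #    the power-on/registration checkboxes from the failover menu.
```

The key ordering is the same one the demo walks through: the VMs stop first, the final changes replicate, and only then does the volume come up at the DR site with replication reversed.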
That’s all I have for today. I hope you found this demo useful and you’ll check out more of our demo videos on our site.
from your on-prem location
to an off-site location
Fully Managed IaaS
and DRaaS Solutions
TRANSCRIPT
In 2020, the average ransomware settlement was $233,000, and the average downtime was 19 days. Nfina Technologies is the premier provider of hybrid-cloud platform solutions, offering more benefits than the public hyperscale cloud without the cost and complexity. Nfina’s Hybrid Cloud™ is a fully managed Infrastructure as a Service and Disaster Recovery as a Service solution designed to protect and ransomware-proof your business. Copies of critical data are maintained on- and off-site, ensuring information is always available for recovery and downtime is kept to a minimum. Nfina uses a unified management control plane to monitor your Edge and cloud data storage pools. In the event of a catastrophic failure, our Nfina-View software allows you to quickly fail over and run from your backups in minutes, not hours or days. We make Infrastructure as a Service, Disaster Recovery as a Service, Managed Services, and System Monitoring easy by combining the entire process into a single-vendor solution. Contact us to learn more.
Discover how Nfina can help save money, simplify, and protect your company’s data storage network.
Hybrid Edge-Cloud Ransomware Solutions
TRANSCRIPT
My name is Matt and I’m an engineer here with Nfina Technologies, and today I’m going to be demonstrating how easy it is to recover from a ransomware attack using our Edge View software. What I have showing here is the Edge View dashboard itself. Over here in the left sidebar, I have cloud cluster one, where I have two storage appliances running our EdgeStore software.
These two storage appliances form a storage cluster, which is serving storage to these two VMware hosts. In particular, they are serving this Windows Server datastore over iSCSI. I have four virtual machines installed on the datastore coming from the storage appliances, and I’m connected to the console of one of those virtual machines. Back at the Edge View dashboard, under the storage tab, I have the Intel 3710 pool selected on this particular storage node. This is one of our storage pools on the cluster. It’s a 12-disk pool made out of six two-way mirror vdevs, roughly 4.3 terabytes with about 500 gigs actually in use. The volume in question, the one we’re interested in, is this Windows Server backup volume.
This is actually the volume where those four virtual machines reside. It’s a terabyte in size, it has a 128K block size, and it’s taking up roughly 310 gigabytes on the storage pool. We can also see from this view that this particular volume has a backup plan configured for it. If we hover over the information icon next to the volume, we can see that snapshots are being taken every 5 minutes and we’re holding those snapshots for 30 minutes.
There are also hourly snapshots being taken that are kept for a day. We’re replicating those same snapshots to an offsite destination, and the view displays the latest snapshot present on each of those systems. So, what we’re going to do now is come back over to our VMware environment. What I have actually done here is downloaded some ransomware onto this particular virtual machine, and I’m going to go ahead and run it.
We’ll just go ahead and click Run, and that’s going to kick off the encryption process for all of our files here on the disk. Before it actually encrypts, I’ll show you this PowerShell script, because we won’t be able to open it anymore once it’s encrypted. As you can see, it has some PowerShell commands related to some performance testing we were doing before.
It’ll take just a few minutes, so I’m going to pause the video while the encryption process finishes. All right. The ransomware has finally finished encrypting all the files. We’ll take a look at that same PowerShell script I was looking at before. You can see that the extension has changed to this WannaCry extension; it’s no longer recognized by PowerShell.
If I try to open it with Notepad, you can see that it has been encrypted and we can no longer read the contents. Here’s the message asking us to pay $300 worth of Bitcoin to this particular address; the payment will be raised on this day, and our files will be lost on this date. So, it’s at this point that we’re actually going to use the Edge View software to recover from this ransomware attack.
Back over here in our Edge View dashboard, we can now come to the snapshots tab for this particular pool, open the actions on the volume, and choose the rollback action. We can either list snapshots older than a certain date or, as I’m going to do, just take the last ten snapshots here. That’s going to produce a list of snapshots.
And we have the ability to roll all the way back to 2:05 if necessary. Right now it’s 3:46. Just to be safe, I’m going to roll back to 3:30, about 15 minutes ago, and at this point we’ll go ahead and choose rollback. Keep in mind, this is a rollback of the entire volume, which means that all virtual machines on the given volume will be rolled back to this point in time.
So right now, that’s about 10 minutes ago according to the system time here. We’re going to agree to that, choose rollback, and that’s going to kick off the rollback process. The rollback process is going to power off the virtual machines that are currently running on the volume before the rollback is actually performed on the storage array side.
Once the rollback actually occurs, we will see the volume become inaccessible; during the rollback, the VMware hosts themselves will not be able to access this volume. And there it goes. So, we’ve seen that the rollback process has been initiated on the storage array side, and in just a few seconds we should see a rescan of the host bus adapters on the VMware side.
From each of the ESXi hosts, we can see that the rescan of those host bus adapters has completed. Now it’s just waiting for the actual datastore to mount. And now the datastore has been remounted within vCenter after the rollback action. We should be ready to power these virtual machines back on, at which point we’ll check that the failover test virtual machine has been restored to its original state.
So, I’ve issued the command to power on all the virtual machines on the datastore. At this point, we should be able to reconnect here, and we’ll go ahead and log in. As we can see, there’s no more ransomware on the desktop. We’ve restored to the point in time where the ransomware was still in a zip file, before we actually executed it.
And we can see our PowerShell script here on the C: drive, the one we looked at before the files were encrypted.
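Because EdgeStore is ZFS-based, the whole-volume rollback the demo performs corresponds to a stock ZFS rollback. The sketch below shows that equivalent operation in Python; the pool, volume, and snapshot names are made up, and Edge View’s own tooling additionally handles powering VMs off and rescanning the hosts.

```python
import subprocess

def rollback_volume(zvol, snap):
    """Roll an entire zvol, and every VM stored on it, back to a
    point-in-time snapshot. -r also discards any newer snapshots."""
    subprocess.run(["zfs", "rollback", "-r", f"{zvol}@{snap}"],
                   check=True)

# e.g. roll the demo volume back to the 3:30 snapshot taken before
# the infection (pool, volume, and snapshot names are placeholders):
rollback_volume("intel3710/windows-server-backup", "auto-1530")
```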
Our fully managed solution for protecting and ransomware-proofing your business.
Ransomware Proof
Your Business
TRANSCRIPT
Hello, my name is Matt and I’m an engineer here at Nfina Technologies, and today I’m going to be giving you a quick demo of just how easy it is to roll back out of a ransomware attack using our EdgeStore™ Storage Appliance. For the demo here, what I have is a VMware® host that’s running our EdgeStore™ Storage Appliance, and we also have two Windows® 10 virtual machines running on the host. The EdgeStore™ Storage Appliance is delivering a datastore, Edge Datastore, back to the host over iSCSI, and these two Windows® 10 virtual machines actually reside on that datastore.
If we come to the EdgeStore™ management interface, we can see that we have the Edge Datastore here, delivered over this iSCSI target, and we also have an SMB file share configured. We’re taking array-level snapshots every 5 minutes on both of those volumes and holding them for an hour, which we can see over here on the snapshots tab of the pool. A few snapshots have already been populated here. So, what we’re going to do for the demo is open up these two Windows® 10 consoles, and we can see that I have the file share mounted to each of these virtual machines. There’s a bunch of user files that I’ve copied onto each one; they all contain the same data. On the C: drive, I’ve also created a lot of files on each virtual machine.
I’ve also already downloaded the WannaCry ransomware onto each of these virtual machines, which we’re going to go ahead and execute at this time. Once we execute the ransomware, we can see the executable expand out into all of these files. We can see command prompts begin to run, and the ransomware actually changes the background image on each of these virtual machines. We can then see the payment portal pop up; it tells us that the payment will be raised if we don’t pay by this date, that we have this amount of time left to pay, and that we will completely lose our files if we don’t pay by the bottom date here. There are instructions over here on how to make contact, where to pay using Bitcoin, and how to decrypt once you actually make your payment. So, our background has been changed and the payment portal has appeared on the screen, but let’s take a look at our actual files on each of these machines. We’ll start with the C: drive, where we can see that the extensions on these files have been changed to WNCRY files, and if we try to open one of these files using a text editor, then we can see that the contents are scrambled, with the WannaCry message. The same goes for the files on the file share itself. The encryption extends not only to the C: drive files but also to any mapped network shares; it’s going to go ahead and encrypt those files as well.
In our case, since we’ve been taking snapshots, this is going to be fairly easy to roll back out of. So, we’ll exit out of these, I’ll power off both of these virtual machines, and we’ll browse up here to our EdgeStore™ virtual machine. Right now it’s 12:47, so what I’m going to do is roll back to at least 12:35, because I know that’s a safe time when the malware was not yet on these particular virtual machines. I’m going to hit roll back on that snapshot, as well as the 12:35 one on the file share, and we’ll get our messages that the rollbacks have occurred.
Then we’ll come back over to our actual host, do a rescan on the iSCSI adapter, and power these virtual machines back on. We can see that it has come back to the state before we executed the malware. Windows® Defender came back on (I had it disabled before), and it detects that the ransomware file is still there on the desktop, so that’s what that message was about. But if we take a look at our C: drive, we can see that these files are not encrypted; they are text files, BMP files, whatever they are. They’re not encrypted with the WannaCry ransomware. Also, if we browse to the file share itself, we can see that the file share files are no longer encrypted either. They were rolled back out of the ransomware.
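The only judgment call in that recovery is choosing which snapshot to roll back to: the newest one taken before the malware was present. A tiny Python helper makes the rule concrete (the snapshot names and times below are invented to match the demo):

```python
from datetime import datetime

def pick_rollback_target(snapshots, known_clean):
    """Return the newest snapshot taken at or before a time we know
    the malware was not yet present (12:35 in this demo)."""
    safe = [(name, t) for name, t in snapshots if t <= known_clean]
    return max(safe, key=lambda s: s[1]) if safe else None

snaps = [("auto-1225", datetime(2022, 1, 5, 12, 25)),
         ("auto-1230", datetime(2022, 1, 5, 12, 30)),
         ("auto-1235", datetime(2022, 1, 5, 12, 35)),
         ("auto-1245", datetime(2022, 1, 5, 12, 45))]  # post-infection
print(pick_rollback_target(snaps, datetime(2022, 1, 5, 12, 35)))
# ('auto-1235', ...): the 12:35 snapshot, matching the demo
```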
Learn how to easily restore your data after a ransomware attack using Nfina’s EdgeStore™.
Nfina EdgeStore™
Demo
TRANSCRIPT
Hello. My name is Matthew Monday, and I’m an engineer here at Nfina Technologies. Today I’m going to give you a brief introduction to our ZFS-based storage solution known as EdgeStore. The solution can run either hyper-converged under VMware or Windows, or in a traditional converged architecture on bare-metal hardware. I’m going to go over some of the features of the actual storage appliance itself.
As I stated before, this is a ZFS-based solution. With that, we get data and metadata checksumming, which means that every single block on the array has a checksum, so silent data errors, bit rot, things of that nature can be detected via the scrub utility, which goes through and reads every block, comparing each block’s computed checksum against the checksum stored on disk. It also includes atomic transactional writes, which means that your writes are either going to occur or they’re not going to occur. There will never be a point in time where you have to run something like chkdsk or fsck to repair the file system because a write was acknowledged before it was actually committed to disk.
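To make the scrub idea concrete, here is a toy Python model of block checksumming. It is a simplification, not ZFS’s implementation (ZFS stores checksums in parent block pointers and typically uses fletcher4 or SHA-256), but the detection logic it illustrates is the same:

```python
import hashlib

# Toy model of ZFS-style block checksumming: a checksum is stored
# separately from each block when it is written, so a scrub can later
# re-read every block and compare against the stored value.
def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

blocks = [b"data block 0", b"data block 1"]
stored = [checksum(b) for b in blocks]   # recorded at write time

blocks[1] = b"data block !"              # simulate silent bit rot

for i, b in enumerate(blocks):           # the "scrub"
    if checksum(b) != stored[i]:
        print(f"block {i}: checksum mismatch, repair from mirror/parity")
```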
As far as protecting your data, the array can do two-way mirroring, four-way mirroring, and single-, dual-, and triple-parity RAID, known in ZFS as RAID-Z1, Z2, and Z3. Those are the equivalents of RAID 5, RAID 6, and the nonexistent RAID 7 in the hardware RAID world. It also comes with native backup and DR built in, so we’re able to replicate the data from your iSCSI or Fibre Channel LUNs, or your NFS or SMB shares, offsite to a secondary location or onsite to a backup box in the same location.
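The trade-off between those layouts is capacity versus redundancy, and the arithmetic is simple enough to sketch (this ignores ZFS metadata overhead and padding, so treat it as an approximation):

```python
def usable_fraction(layout: str, disks: int) -> float:
    """Approximate fraction of raw vdev capacity left for data.
    'mirror' means every disk holds the same copy of the data."""
    redundancy = {"mirror": disks - 1,   # n-way mirror: n-1 extra copies
                  "raidz1": 1, "raidz2": 2, "raidz3": 3}[layout]
    return (disks - redundancy) / disks

print(usable_fraction("mirror", 2))   # 0.5, a two-way mirror
print(usable_fraction("mirror", 4))   # 0.25, a four-way mirror
print(usable_fraction("raidz2", 6))   # ~0.67, survives any two failures
```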
It’s a highly available solution with dual controllers, which means that one of the storage controllers can be powered off, shut down, or completely fail, and the other controller will take over the resources and continue to serve IOs to the client systems. The system can be configured active-active or active-passive. Depending on the setup, there are different architectures: a shared-storage architecture, where all the disks in the system are SAS and the two storage controllers can access every disk via dual-ported SAS drives, or a metro cluster option, where the data on the two storage controllers is kept in sync via an Ethernet link that copies IO between the controllers. This link is 10 gig, 25 gig, or 100 gig, depending on the setup. We can deploy this as a virtual storage appliance under VMware or Hyper-V, or directly on bare-metal hardware.
As far as accelerating storage performance, the system has a couple of mechanisms built in. The first is what’s known as the write log, or the ZIL. This accelerates random writes through the system: essentially, any random writes go to the ZIL before they’re flushed out to the pool as a sequential transaction. The ZIL is typically a high-speed SSD or NVMe device.

Read caching is twofold. The first level is in RAM, known as your L1 ARC. Essentially, the majority of the RAM you assign to these appliances is utilized as a read cache, so the more RAM, the better. The second-level read cache can be either an SSD or an NVMe device, and it’s essentially an extension of the RAM cache. The caching mechanism keeps a combination of your most frequently used and most recently used data, and we typically see very good performance out of the read cache.

One thing I like to note about rebuilds: when you have a disk failure in the system, the rebuild only rebuilds the data that’s actually on the disk. If you’ve ever rebuilt a drive on a hardware RAID controller, even if a drive fails, say, a few minutes after you created the array, when there’s really no data on the array yet, the rebuild still takes quite a bit of time because it rebuilds the entire drive. Hardware RAID controllers aren’t intelligent enough to know where the data is on the disk, whereas ZFS can detect how little data is on the disk, if any, and rebuild only the data instead of the whole disk.
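The “most frequently and most recently used” blend is the idea behind the ARC. The real ARC is adaptive and far more sophisticated, but a greatly simplified Python stand-in shows why blending the two beats pure LRU for storage workloads:

```python
from collections import OrderedDict, Counter

class ToyReadCache:
    """Toy stand-in for the ARC idea: evict the entry with the fewest
    hits, breaking ties by least-recent use. (The real ARC adaptively
    balances recency and frequency lists; this is only the flavor.)"""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # iteration order tracks recency
        self.hits = Counter()       # access counts track frequency

    def get(self, key, load):
        self.hits[key] += 1
        if key in self.data:
            self.data.move_to_end(key)        # refresh recency
            return self.data[key]             # hit: served at RAM speed
        if len(self.data) >= self.capacity:   # miss on a full cache:
            victim = min(self.data, key=lambda k: self.hits[k])
            del self.data[victim]             # evict least valuable
        self.data[key] = load(key)            # read from the pool
        return self.data[key]

cache = ToyReadCache(capacity=2)
read = lambda k: f"<block {k} from pool>"     # stand-in for a disk read
cache.get("A", read); cache.get("A", read)    # A: 2 hits
cache.get("B", read)                          # B: 1 hit
cache.get("C", read)                          # evicts B, keeps hot A
```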
Compression and deduplication are available on the system. We support thin provisioning on the LUNs on the array, and we have the ability to take snapshots and clones of the volumes. As far as access protocols from the client systems, we support iSCSI, Fibre Channel, NFS, and SMB as the client access methods.

With that, I’m going to jump into what a typical architecture looks like for full data protection. Typically, you’re going to have a converged or hyper-converged production dual-node system on site, with what’s known as Zpools created within the system. These pools can reside on either of the two nodes at the production site and can fail over from one node to the other. In an active-active setup, you would have one pool that typically resides on each node, with the ability for both pools to move to one system in the event of maintenance, a failure, anything like that. From there, we’re able to replicate the data that resides on the production system to a backup pool on a separate piece of hardware at the local site, and then also replicate that data to a remote backup server offsite. With that, you’re able to protect yourself from viruses, data corruption, disk failures, rebuild failures, system failures, natural disasters, theft, human error, and downtime. So, as far as the demo here, I’ll close out of that.
I’m going to show you the hyper-converged solution. I have two hosts here in the demo cluster under VMware, and also these two storage appliances. This storage appliance here is tied to 172.16.1.28, and this storage appliance here is tied to 172.16.1.74. These nodes are actually in control of the storage that is inside both of these hypervisors, and these are the management interfaces for those particular VMs. As you can see, I already have one storage pool available, which is providing the storage for these four failover test VMs here.
I’ll give you a quick rundown of what the solution looks like as a whole, and then we’ll go through and create a pool and a volume and set up a backup plan. Through the management interface here, we’ll take a look at the failover tab first, because this is how the nodes are clustered together to begin with. Essentially, they’re joined together; as you can see, I have edge one and edge two, and they’re able to reach each other. My failover status is started here, and whenever you create pools on top of the cluster, they will show up under your failover resources. This cluster is what actually controls the movement of the storage pools between the nodes. We have rings, which are where our cluster heartbeats and information are exchanged between the nodes, and then we also have our mirroring path.
As I mentioned before, the architecture I’m demoing for you today is utilizing the cluster-over-Ethernet architecture, so my data is mirrored between the storage nodes via this link here. What I’m going to show you is this storage pool, where I’m providing this Windows Server datastore. At the pool level, you create a centralized pool; here there’s 1.45 terabytes of centralized storage. From there, we’re able to create what’s known as Zvols. Zvols are utilized for block storage, so those are the iSCSI or Fibre Channel volumes that you want to deliver to your clients. Your NFS or SMB shares live on what’s known as a dataset: you create the dataset, and then you’re able to create either SMB or NFS shares on top of it.

The virtual IP is an important concept for the IP-based access methods (NFS, SMB, and iSCSI), because this is how the dual-node system provides a single IP address for the clients to access, no matter which node the pool actually resides on. These virtual IPs map to an interface on both nodes, so you pass this IP to the client system, and then whether edge one or edge two is in control of the storage pool, clients are still able to stay connected through this single IP right here.

We do support Active Directory as well as LDAP for authentication on the SMB or NFS shares if that’s necessary. As you can see here, I’m connected to an Active Directory server in my lab; if you don’t have Active Directory, there is LDAP available, or you can create local users and groups.
As far as authentication on the iSCSI side, CHAP and mutual CHAP are available for security, and there are IP and user restrictions available on the SMB and NFS shares as well. Now I’m going to create a new pool, just to give you a quick demo of how the centralized pool is actually laid out. I’ll go ahead and create the new pool here and just click Add New Zpool; all the disks that are in your system will be available at that point.
What I’m actually going to use is a four-way mirror. I’m going to utilize these four disks, because they’re all the same: 1.46 terabytes each. I’ll put those in a single mirror and click next. Then you’re able to add your write logs in here; I’m going to pick these two disks, one local and one remote, and add those in as a mirror. You’re also able to add in an SSD read cache here, which I’m not going to do, because these disks are already SSDs; reading directly from the disk would be the same as reading from the cache, so there’s really no benefit in adding that in. Next is spares on the actual pool: you can assign spares to the pool for use in the event of disk failures. Then I’m going to call this pool demo pool, and we’ll go ahead and create it. We click add, wait just a second for the pool to be created, and then the user interface reloads and we see the storage pool available here.

So now we see the demo pool that we just created as well as the VMware pool that I already had on the system. If we click into this pool, we can see the disk layout on the pool we just created: it’s made up of two local and two remote disks, plus the local and remote disks for the write log. We don’t have any Zvols created for iSCSI or Fibre Channel yet, or anything set up for the shares. I’ll go through and show you how to create an iSCSI target; it’s fairly easy. To create a new volume, you click Add New Zvol. I’m going to call this demo vol, and then you’re able to assign the size here.
I’m going to show that we can do over-provisioning as well as thin provisioning here. I only have 1.4 terabytes of space on this pool, but I can assign five terabytes to this volume. You have your settings as far as compression and block size; there are some options here for block size depending on the system you’re going to attach it to, but typically the defaults are fine. So, we’re at 4.88 terabytes, and we’ll click next. As I said before, we’re able to set up mutual CHAP and IP restrictions on that, which I’m not going to do for the sake of this demo. At that point, our volume is created here on this new pool, and if we had a client system that we wanted to attach to it via iSCSI, we could. As you can see, we created this to be roughly five terabytes, but the physical size shows it’s only actually utilizing 56 kilobytes on the array. That’s your thin provisioning; with thick provisioning, that space, the logical size, would need to be available in the actual storage capacity. As far as the native backup and disaster recovery options, I’m going to show you how to set those up after you create a volume. What you want to do is create a backup and recovery plan. We’ll go to backup and recovery and click Add Replication Task, then click Browse. We created this in demo pool, so this is our resource; we’re able to see all the Zvols in the system, and you can see I already have one that’s configured in another task, so it won’t let me select it. We’ll click the demo pool’s demo Zvol, and then we’re able to create a plan to take snapshots and replicate the data on the volume we just created.
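Since EdgeStore sits on ZFS, the pool and thin volume built through the GUI correspond to standard ZFS operations. Here is a hedged sketch of those equivalents; the device names, pool name, and volume name are placeholders, and EdgeStore’s own tooling handles the local/remote disk pairing of the metro cluster:

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

# Four-way mirror data vdev, mirrored write log (the ZIL/SLOG), and a
# hot spare for automatic rebuilds. Device names are placeholders.
sh("zpool create demo-pool"
   " mirror sda sdb sdc sdd"       # one 4-way mirror vdev
   " log mirror nvme0n1 nvme1n1"   # mirrored write log
   " spare sde")                   # standby disk for failures

# Thin-provisioned ("sparse") 5 TB zvol on a ~1.4 TB pool: space is
# only consumed as data is written, hence 56 KB physical at creation.
sh("zfs create -s -V 5T -o volblocksize=128k demo-pool/demo-vol")
```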
The way the retention rules work is that you take a snapshot every X amount of time and you keep that snapshot for Y amount of time, and you can have many rules. By default, you’ll have an hour’s worth of snapshots taken every 5 minutes, three days’ worth of snapshots taken every 15 minutes, and two weeks’ worth of snapshots taken every hour. We’ll just blow a couple of those away and roll with an hour’s worth of snapshots taken every 5 minutes here on the source, and then we can go here and replicate this off to a secondary location.
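The retention logic is easy to model. This is a toy version, not EdgeStore’s actual scheduler: a snapshot survives as long as at least one rule still claims it, meaning it lies on that rule’s schedule and is younger than that rule’s keep window.

```python
from datetime import datetime, timedelta

# Each rule: take a snapshot every `interval`, keep it for `keep`.
RULES = [(timedelta(minutes=5),  timedelta(hours=1)),
         (timedelta(minutes=15), timedelta(days=3)),
         (timedelta(hours=1),    timedelta(weeks=2))]

def keep_snapshot(taken: datetime, now: datetime, rules=RULES) -> bool:
    """True while at least one rule still wants this snapshot."""
    age = now - taken
    on_schedule = lambda iv: taken.timestamp() % iv.total_seconds() == 0
    return any(on_schedule(iv) and age <= keep for iv, keep in rules)

now = datetime(2022, 1, 5, 13, 0)
print(keep_snapshot(datetime(2022, 1, 5, 12, 35), now))  # True: 5-min rule
print(keep_snapshot(datetime(2022, 1, 5, 10, 0), now))   # True: 15-min/hourly
print(keep_snapshot(datetime(2022, 1, 5, 10, 5), now))   # False: aged out
```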
Now, if I want to go to, let’s say, 175, which is another system in my lab, then I’m able to go to that system and replicate to one of the volumes residing on it. I’ll click Apply, and then you’re able to set up a new plan for replicating to that system. Let’s say you had a secondary site and you wanted to replicate just daily, keeping the data for one month; you’d be able to replicate the source volume to a secondary location at a different interval than you had on the source. I’m not actually going to replicate to that volume, because there’s some testing going on with that particular volume at the moment. Once you click next, you’re able to register vCenter if you want to take VMware-level snapshots and store them inside of the array-level snapshots; typically this isn’t necessary, but some users may require it. You’re able to enable mbuffer, which is a network buffer that can improve send and receive performance when you replicate to a secondary location, and you’re able to add in a task description for identification. We’ll go ahead and add that in. The task works on the clock, so after 5 minutes we should see our snapshots populate here under the snapshots tab of the demo pool.
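In plain ZFS terms, the replication leg of such a plan is an incremental send piped through mbuffer to the remote box. A rough sketch, with placeholder host and dataset names rather than anything from the lab:

```python
import subprocess

# Incremental send of the newest snapshot to the secondary system,
# buffered by mbuffer to smooth out the network transfer.
subprocess.run(
    "zfs send -i demo-pool/demo-vol@prev demo-pool/demo-vol@latest"
    " | mbuffer -s 128k -m 1G"          # the mbuffer option in the plan
    " | ssh backup-host zfs recv -F backup-pool/demo-vol",
    shell=True, check=True)
```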
While we’re waiting on that, I’ll go ahead and show you the actual failover. I know both of these pools are on the same node, but the cluster is running here, so we’re able to freely move this pool. Demo pool is showing up as active on edge two, but I clicked move, so this pool should export here and be imported on this node. Now, as you can see, it has been imported on this node. With that, the backup task now moves over to this node and no longer shows on the other node.

While we’re waiting on those snapshots to populate, I’ll give you a quick demo of a restore under the VMware pool, because I already have a task running on the Windows Server LUN, which is the datastore under VMware that houses these four VMs. If I go to this Windows Server datastore, I can see that it’s running 1, 2, 3, 4 VMs right here. To actually do the restore, what you want to do is go to your snapshots tab, where you can view the snapshots that are available for the given volume. I can see I have snapshots that date back to 12/8, which is about six or seven days ago, and then I have some snapshots in the 30-minute range up here from today.

To restore from one of these snapshots, you have a couple of options. If you come to the snapshot itself, you’re able to click the details and see what’s written and what’s used on the snapshot, and then you’re able to either clone the snapshot or roll the entire volume back to the snapshot date. The rollback is a drastic operation that would take all four of these virtual machines back to this given point in time; if you need to do a file-level restore inside a virtual machine, or you need to restore only one virtual machine, what you want to do instead is clone the volume. I’m going to demo a clone, and then you can directly attach it to your target, which I have here; this target is already connected to my VMware environment.

At that point, I have the demo clone I just created showing up here, attached to my target. Now I can come over to my cluster and rescan storage at the cluster level, which scans both hosts for the storage device we just connected to the target; it was at 13, and I can see it’s actually the same size as 14. If we go to new datastore, when it scans for new datastores to create, it’s going to detect that there is a UUID conflict with a datastore that’s already mounted in the system, because the clone we just created is an exact copy of the Windows Server datastore that’s already mounted in the system.
So, whenever you want to mount a cloned volume while the original datastore is still mounted in the system, you need to resignature the volume in order to get it to mount. We’re just going to click to assign a new signature here, and then we should see this datastore mount once the volume is resignatured; that’s all we’re waiting on now. Let’s take a look at what’s inside of Windows Server: we can see failover test one, two, three, and four, and I can see that the snapshot did mount. When it comes into the system, it shows “snap”, an ID, and then the original datastore name. If I take a look here, it also contains those four VMs from the point in time of the snapshot.
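The clone path is the gentle alternative to a full rollback, and in ZFS terms it is a single operation. A sketch with invented names (the resignature step belongs to vCenter, not the array):

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

# Clone a point-in-time snapshot into a new writable volume; the
# running original is untouched, and the clone initially consumes
# almost no extra space. Dataset names are placeholders.
sh("zfs clone demo-pool/windows-server@auto-1208"
   " demo-pool/windows-server-clone")
# The clone is then attached to the iSCSI target, the hosts rescan,
# and vCenter resignatures the duplicate UUID so the copy can mount
# alongside the original datastore for single-VM or file pulls.
```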
That being said, I could either completely restore all four of these VMs, or I could individually add one of these VMs to the inventory, or attach a virtual disk file to a VM to gather information off of it if necessary.

So, I’ve covered iSCSI storage as it pertains to running virtual machines in a VMware environment. Now I’m going to jump into the file sharing features of the storage array itself. As I’ve mentioned before, we support both SMB and NFS as our file sharing protocols, and we have the ability to make those shares directly on the storage pools within the array. That allows you to get rid of the typical file server you may be running (usually a Windows-based machine), because you can deliver the file shares directly from the array. As I said before, we support Active Directory and LDAP for authentication on the file shares themselves.

To actually create the file share, you come to the storage pool level and click the shares tab. After you click the shares tab, you create what’s known as a dataset by clicking add dataset: you give it a name, choose a record size and cache settings (the defaults are typically okay), and turn access time on or off. We support quotas and reservations, which can be set at the dataset level. Then you click add, and the dataset is added. I have an example here, test two, which I’ve already created; I can see the A next to it, which means I’ve already configured a backup task on this given dataset. Once you have a dataset, you’re able to create the file shares on top of it. I have the shares below here; I’ve already got a share named edge file share, and it’s being shared out over the SMB protocol at the moment. If I take a look at this file share, I can see that I have a backup and recovery task already set up on it, named Test two, and the retention plan on it is to take a snapshot every 5 minutes and hold the snapshots for one hour.
It’s important to note that we want to access this file share using the virtual IPs assigned to the pool. Typically, you’re going to add another virtual IP to the pool, which your users will use to access the file shares and map them to their local PCs. In this case, I’m using 172.16.1.253 as the virtual IP that I want to access the file share over. Let’s go ahead and browse to that location using File Explorer. If I go there, I can see I have edge file share, so it’s accessed much the same way your typical SMB shares are accessed. If I come to the snapshots tab, I’m able to go down to the datasets and see that test two has a task taking snapshots at a rate of every 5 minutes and holding those for an hour. The nice thing about housing the file shares on the storage array is that the snapshots are also viewable by right-clicking on the file share itself and browsing to Previous Versions; there you will see the array-level snapshots populate within the properties tab. If I go into the file share, I can browse around; let’s say I jump into the schematic folder and accidentally delete the archive file from it. I browse to the schematic folder, open Previous Versions, and see that a snapshot was taken one minute ago. Let’s see if that file resides in there. Sure enough, one minute ago there was a file called Archive, which I just deleted. At this point, I can either copy that file back into its correct location, or I can restore the entire folder to that point in time. I’m just going to choose Restore here; at that point, it rolls every file inside of this folder back, and as you can see, we’ve gotten our archive file back in the folder.
Nfina EdgeStore™ is a high-availability backup and disaster recovery software solution.
Scaling The Edge
with Nfina
TRANSCRIPT
Opportunities at the Edge
We are now firmly in the Cloud Computing Era. However, most new opportunities for the Cloud lie at the “Edge.” “Edge Computing” is a distributed architecture… where data gets processed closer to a user’s device than to the cloud. Why are these “Edge Solutions” essential for today’s data management?
1. Added Security and Compliance – Edge Devices can build security into simple protocols to compensate for non-encrypted IoT devices.
2. Faster Response Times – Without a round-trip to the cloud, data latency is reduced with edge computing.
3. Redundancy – Continuous backups can be easily cloned and made active in the event of an outage.
4. Dependable Operation – Edge devices can operate without disruption even when they’re offline or connectivity is intermittent.
5. Reduced Bandwidth – Eliminating the need to send data back and forth can diminish the drag of IoT and edge devices.
IoT technology is arriving quickly. Gartner Research forecasts that there will be 25 billion IoT and edge devices by 2022. Expectations for speed and performance will continue to expand. Edge computing will complement and enhance cloud computing.
Contact Nfina today for more information on how to build fast, integrated, and cost-effective edge solutions.
Discover why Edge Solutions are essential for today’s businesses’ data management and security.
Nfina
Advantage
TRANSCRIPT
The volume of data is growing exponentially.
Storage requirements are changing quickly. But why are today’s vendors swaying customers into accepting sub-optimal outcomes? Increased cyber-security threats. Increased warranty & support costs. Increased total cost of ownership.
The truth is, customers do not have to accept this. Come to Nfina for clarity. Nfina is Lean. Nfina products include only the essential applications and the OS…making them free from any bloatware, spyware, or adware.
Nfina is Open. Plug-and-play with multiple leading platforms such as Windows, Linux, and VMware…an open-systems approach avoids single-vendor lock-in.
Nfina is Flexible. Hyper-converged solutions for storage on VMware or Windows Hyper-V…Nfina’s hardware is very open and virtually software-agnostic.
Nfina is Reliable. Nfina’s products have recorded MTBF in excess of 2 million hours…making them some of the most reliable storage equipment in the market.
Nfina is Committed. A market-leading 5-year warranty vs. market standard 3-year warranty…helps lower the Total Cost of Ownership.
To learn more about complete server and storage solutions from Nfina Technologies, go to www.nfina.com.
Storage needs are changing quickly. Is your vendor offering the best solutions for your business?