Pay as you grow.
IT administrators currently face several pressing concerns, foremost among them rapid data growth, the need to respond quickly to storage requests, and finding a cost-effective, efficient solution for their infrastructure's needs. Taking each in turn: the largest issue is the deluge of data flooding infrastructures that must process, deliver, manipulate, and store it. The next is handling the rapid-fire storage requests from internal customers, a never-ending struggle amid ever-increasing demands on all IT fronts. And finally, while the amount of data that businesses generate keeps growing, IT budgets are not keeping pace with what traditional storage would cost to handle big data. Traditional storage solutions scale up, which makes heavy investment and massive over-provisioning the default approach to capacity planning.
Our mission at Bigtera is empowering enterprises with software-defined freedom. It is born of the knowledge that current data center infrastructures struggle to handle big data and cloud computing in terms of storage. Current infrastructures are caught in restrictive storage silos, with storage hardware typically assigned to applications in a one-to-one ratio. The result is a huge amount of isolated, under-utilized storage, along with a lack of agility in how traditional storage infrastructures operate.
CIOs and IT staff have been embracing the idea of software-defined infrastructure (SDI), but they have concerns about the transition from what they have to what they want. We understand that transforming an enterprise's infrastructure does not happen at the flip of a switch. Because we are committed to empowering enterprises with the freedom to define their storage infrastructure, our team of disruptive innovators has focused its efforts on the following core beliefs to facilitate a smooth, seamless transition to the next generation of data center infrastructures:
Turbocharge your infrastructure.
Data centers must have the capacity to store, but they must also have the processing power and throughput to handle applications and workloads. Many data centers today must support a variety of performance-intensive applications and workloads, including, but not limited to, running virtual servers, supporting VDI, and hosting SQL databases. Processing, delivering, and storing data is not just an issue of capacity. While traditional storage can handle capacity, new storage technologies are needed to tackle capacity and performance together.
VirtualStor™ brings blazing performance to your data center in several ways. First, by leveraging SSD, VirtualStor™ improves workload and application performance. When an administrator enables SSD acceleration, VirtualStor™ absorbs randomly written data on SSD, processes it, and then writes it sequentially to HDD. By combining SSD caching with sequential writing, VirtualStor™ can improve traditional HDD storage performance by at least 10X. Administrators can improve performance further by adding more SSD (SATA/SAS, PCIe) or by scaling VirtualStor™ out.
As VirtualStor™ scales out, throughput and IOPS also improve significantly. IOPS is the number of I/O operations per second, while throughput is the speed at which data is transferred. The more nodes available to VirtualStor™, the faster service requests from applications and workloads can be handled. VirtualStor™ clusters (VirtualStor™ appliances) improve performance by performing transactions within the clusters and by using any free memory available in a cluster. Administrators can also create caching pools specifically for slower storage to improve hot-data access performance. The VirtualStor™ distributed file architecture further improves performance when coupled with data replication services: because VirtualStor™ intelligently directs requests to the closest available replicated data blocks, read throughput increases.
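As an illustration of how IOPS and throughput relate (throughput ≈ IOPS × I/O size), here is a quick back-of-the-envelope calculation; the node count, IOPS figure, and 4 KB block size are hypothetical examples, not VirtualStor™ benchmarks:

```python
def throughput_mb_s(iops: int, block_size_kb: int) -> float:
    """Throughput (MB/s) implied by an IOPS figure at a given I/O block size."""
    return iops * block_size_kb / 1024

# A single node handling 20,000 IOPS at a 4 KB block size:
single = throughput_mb_s(20_000, 4)        # 78.125 MB/s
# Scaling out to 4 nodes roughly quadruples the aggregate figure,
# assuming requests are distributed evenly across nodes:
cluster = throughput_mb_s(4 * 20_000, 4)   # 312.5 MB/s
print(single, cluster)
```

Larger I/O sizes raise throughput for the same IOPS, which is why workloads are usually characterized by both numbers rather than either one alone.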
One platform to unify and rule them all.
Over time, data centers become a mix of many different storage types (SAN, NAS) from many different vendors, driven by budgets, device availability, immediate resource needs, and storage requirements. As more and more devices of differing types are added, management becomes far more complex: each type of storage comes with its own management portal or console, and administrators can waste precious time moving back and forth between each product's console even when performing simple monitoring tasks.
VirtualStor™ provides a unified storage platform, so companies do not need to choose between storage types. As more VirtualStor™ appliances are added, they seamlessly become part of a single massive decentralized storage entity that can be partitioned into storage of any type. VirtualStor™ accomplishes this by abstracting the storage hardware from the control layer. VirtualStor™ is an object-based storage solution, so all storage devices (SSD, HDD) VirtualStor™ manages become object storage devices (OSD). Administrators can merge several SSD and HDD into a single OSD. Initially, all of the OSD combine to make up one massive storage resource pool (SRP). Administrators can then break up the single massive pool into multiple smaller SRP by assigning OSD to the SRP that the infrastructure requires; one OSD can belong to multiple SRP. Content-addressed storage (CAS) is created by assigning the Amazon S3 or OpenStack Swift protocol to an SRP. Virtual Storage is created by assigning resources from one or more of the SRP. Once assigned, VirtualStor™ supports creating network attached storage (NAS) and storage area networks (SAN) that can run simultaneously. These storage types are supported by several storage protocols: NAS (NFS, CIFS) and SAN (iSCSI, FC).
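The OSD and SRP relationships described above can be sketched with a toy model; the class names, device names, and pool layout below are illustrative assumptions, not VirtualStor™'s actual API:

```python
# Hypothetical model of the OSD/SRP relationships -- illustrative only.

class OSD:
    """An object storage device merged from one or more SSDs/HDDs."""
    def __init__(self, name, devices):
        self.name, self.devices = name, devices

class SRP:
    """A storage resource pool: a named set of OSDs, optionally
    exposed through a protocol (e.g. iSCSI for SAN, S3 for CAS)."""
    def __init__(self, name, osds, protocol=None):
        self.name, self.osds, self.protocol = name, list(osds), protocol

osd_a = OSD("osd-a", ["ssd0", "hdd0", "hdd1"])
osd_b = OSD("osd-b", ["hdd2", "hdd3"])

# Initially all OSDs form one large pool...
main_pool = SRP("main", [osd_a, osd_b])
# ...which can be partitioned into smaller pools; note that one OSD
# may belong to more than one SRP at the same time.
fast_pool = SRP("fast", [osd_a], protocol="iSCSI")
s3_pool = SRP("object", [osd_a, osd_b], protocol="S3")

assert osd_a in fast_pool.osds and osd_a in s3_pool.osds
```

The key point the model captures is the many-to-many mapping: pools are carved out of a shared set of OSDs rather than bound one-to-one to physical devices.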
Management is kept simple because all storage types appear in a single management console provided by VirtualStor™. VirtualStor™ provides a single decentralized management portal for all storage types (SAN, NAS, CAS); the portal remains available as long as at least one VirtualStor™ appliance is working. It provides a graphical dashboard for at-a-glance monitoring of storage resources, storage devices, and VirtualStor™ processes. Administrators can also set up alerts for proactive monitoring of their resources.
The VirtualStor™ management console delivers complete control over storage resources to administrators. Administrators can intuitively configure the storage type (NAS, SAN, CAS), capacity, QoS (IOPS, throughput), data availability (for applications and storage), and data services (compression, deduplication, encryption) for each Virtual Storage area. VirtualStor™ also provides open RESTful management APIs so administrators can seamlessly integrate VirtualStor™ with their infrastructure management framework or other business processes.
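As a sketch of what driving a RESTful management API might look like, the snippet below builds a volume-creation request; the host name, endpoint path, and payload fields are hypothetical placeholders, not VirtualStor™'s documented API:

```python
# Hypothetical RESTful management call -- endpoint and fields are
# illustrative assumptions, not VirtualStor's documented API.
import json
import urllib.request

payload = {
    "name": "vdi-volume",
    "type": "NAS",
    "capacity_gb": 2048,
    "qos": {"iops_limit": 50_000, "throughput_mb_s": 500},
    "services": {"compression": True, "deduplication": True},
}

req = urllib.request.Request(
    "https://storage-mgmt.example.com/api/v1/volumes",  # placeholder URL
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would submit the request to the portal.
print(req.get_method(), req.full_url)
```

Because the interface is plain HTTP with JSON payloads, the same call could be issued from any automation framework or scripting language an administrator already uses.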
Less is more.
As companies grow, so too does their infrastructure. This requires a significant investment in time, effort, and money, and leads to capacity planning issues, which in turn lead to over-provisioning. VirtualStor™ helps eliminate capacity planning issues and over-provisioning through administrator-controlled data services and automated optimization.
Administrators can apply various data services to virtually extend the available space, with compression, data deduplication, and erasure coding being the foremost. Data compression uses a lossless algorithm to reduce the footprint data requires when stored. Data deduplication eliminates duplicate data, which can significantly reduce the amount of space files occupy. Finally, erasure coding is a very efficient form of data protection. VirtualStor™ breaks data down into blocks (a file is broken into blocks A, B, C, and D) and distributes the blocks across the storage infrastructure. When erasure coding is enabled, a parity file is created for the blocks (A+B+C+D = parity file). If any of the blocks are corrupt or deleted, VirtualStor™ uses the parity file to recreate them.
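The parity idea above (A+B+C+D = parity file) can be illustrated with single-parity XOR, which tolerates the loss of any one block. Production erasure codes such as Reed-Solomon tolerate multiple losses; VirtualStor™'s actual scheme is not specified here:

```python
# Single-parity XOR sketch of the erasure-coding idea: losing any one
# block, it can be rebuilt from the survivors plus the parity block.

def xor_parity(blocks):
    """Compute a parity block by XOR-ing equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

blocks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # the file's four blocks
parity = xor_parity(blocks)

# Lose block C, then recover it from the three survivors plus parity:
recovered = xor_parity([blocks[0], blocks[1], blocks[3], parity])
assert recovered == b"CCCC"
```

The storage cost is one extra block for the whole group, which is why erasure coding is more space-efficient than keeping full duplicates of every block.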
VirtualStor™ automates efficient optimization of your storage resources in several ways. First, VirtualStor™ uses thin provisioning to provide resources just as they are needed. Second, storage resources are balanced across storage nodes so no single node carries more than its fair share of the load, which extends the life of storage devices. Finally, VirtualStor™ adapts to make the best use of what is available in an infrastructure. If a data center has SSD available, VirtualStor™ uses the SSD for caching hot and warm data and for fast data processing, while using HDD for cold data or for applications that do not need SSD's lightning-fast performance.
Robust and resilient.
Regardless of how well a solution performs, robustness and resilience are critical for any solution. Data and business continuity are the lifeblood of any business. VirtualStor™ ensures business, data, and application continuity on several fronts, from data and service availability to data security.
Data availability is critical for any business. VirtualStor™ data availability functions include data replication, erasure coding, self-repair, and software RAID. VirtualStor™ breaks data down into blocks in real time, and then distributes those blocks equally across VirtualStor™ object storage devices (OSD). Administrators can configure VirtualStor™ to generate up to 10 duplicates of each data block; each duplicate increases the data's availability, so there is no single point of failure for any data block. Enabling data replication has the added benefit of improving application and workload performance when data is read.
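The replica-placement idea can be sketched as follows; the round-robin strategy, node names, and replica count below are illustrative assumptions, not VirtualStor™'s actual placement algorithm:

```python
# Sketch: place N replicas of each block on N distinct nodes so that
# no block has a single point of failure (illustrative strategy only).

def place_replicas(block_id: int, nodes: list[str], copies: int) -> list[str]:
    """Pick `copies` distinct nodes for a block, spread round-robin
    so consecutive blocks start on different nodes."""
    assert copies <= len(nodes), "need at least as many nodes as copies"
    start = block_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(copies)]

nodes = ["node1", "node2", "node3", "node4"]
for block in range(8):
    placement = place_replicas(block, nodes, copies=3)
    # Every block's replicas land on three distinct nodes:
    assert len(set(placement)) == 3
```

Because each block's copies sit on different nodes, losing any single node still leaves at least two readable copies of every block.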
Erasure coding offers administrators an alternative to data replication, when capacity usage is critical to the administrator. Erasure coding involves VirtualStor™ creating a parity file, after data has been broken down into data blocks and distributed across various VirtualStor™ OSD. If any of the data blocks is missing or damaged, VirtualStor™ uses the parity file to recreate the missing or damaged data blocks. Like data replication this ensures that there is no single point of failure for any of the data blocks.
VirtualStor™ intelligently monitors each data block across the storage infrastructure for two reasons: self-repair and snapshot backups. First, if an issue occurs with a device, or if a data block becomes corrupt or goes missing (which could occur if an administrator removes a storage device from VirtualStor™), VirtualStor™ immediately generates a new copy from another duplicate of the block and saves it in another location. Second, the blocks allow VirtualStor™ to create snapshots for data backup and recovery purposes.
VirtualStor™ balances data equally across each storage entity and storage device. If legacy storage comes under VirtualStor™ management, data accessed from the legacy storage is automatically stored in new storage managed by VirtualStor™. Thanks to this automatic data migration, administrators can retire or replace storage devices with no disruption of service for applications or data.
To ensure system availability, VirtualStor™ uses round-robin DNS and IP takeover services. Round-robin DNS uses a list of IP addresses for servers with identical services, both for workload balancing and for handling service issues. If an issue occurs for any reason, workloads cycle through the IP list to find an address that is working properly. With multiple appliances deployed (typically three, four, or more), if any appliance encounters issues, the remaining appliances seamlessly take over its application and workload services by taking over its IP.
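The failover behavior described above can be sketched as a client cycling through the appliance IP list until one responds; the IP addresses and health check below are hypothetical stand-ins for whatever probe a client actually uses:

```python
# Sketch of round-robin failover: try each appliance IP in turn and
# use the first one that answers (hypothetical addresses and probe).

APPLIANCE_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def pick_working(ips, is_healthy):
    """Return the first IP in the rotation whose appliance is healthy."""
    for ip in ips:
        if is_healthy(ip):
            return ip
    raise RuntimeError("no appliance reachable")

# Simulate the first appliance being down; traffic moves to the next IP:
print(pick_working(APPLIANCE_IPS, lambda ip: ip != "10.0.0.11"))  # 10.0.0.12
```

IP takeover works in the opposite direction: rather than the client skipping a dead address, a surviving appliance adopts the failed appliance's IP so existing clients need not change anything.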
VirtualStor™ can protect data stored in Amazon S3 storage resource pools (SRP) using Intel® AES-NI encryption (when available) or software encryption. Encryption can be enabled for critical data or applications, while data with a lower level of confidentiality can be left unencrypted.
Storage that fits.
Administrators constantly face challenges when trying to satisfy their data center's requirements. Customers have varying requirements for their applications and workloads, and administrators must juggle those requirements against the solutions available in the infrastructure. This is where VirtualStor™ shines as a storage solution.
VirtualStor™ is extremely versatile and can be configured to suit whatever environment customers need. Whether storage type (NAS, SAN, CAS), capacity, performance (IOPS, throughput), or data protection is the primary concern, or a balance of two, three, or all of them is needed, VirtualStor™ provides the flexibility and agility to deliver.
When it comes to capacity, VirtualStor™ scales out to increase it. If administrators want to squeeze the most out of their investment, they can enable data services (data compression, data deduplication) and data protection (erasure coding, software RAID) features.
If performance is the primary concern, scaling VirtualStor™ out increases IOPS and throughput performance. VirtualStor™ provides SSD acceleration (data caching, sequential write) and cluster caching to further improve IOPS. VirtualStor™ also further improves throughput performance when data replication features are enabled.
Finally, VirtualStor™ gives administrators the freedom to choose the data protection that meets their needs. Erasure coding and software RAID are more storage-efficient, while data replication provides better application and workload performance. Meanwhile, other data protection features (snapshot backups, cloud backup) support the infrastructure.