With numerous storage options available on the market, find out in their latest article why the technical team at French company Atreid, headed up by CTO Florent Le Duc, decided to make GB Labs their preferred supplier.
Storage is a huge topic, but it is an essential component of any configuration, whether standalone, independent, or part of a network of machines. The diversity of solutions and uses leads us to give as detailed an overview as possible of the storage technologies available to date, both for individual and for collaborative use.
We will thus distinguish three main categories of storage:
- DAS (direct attached storage), or directly attached storage
- NAS (network attached storage), "general public" or democratized
- Production network storage (professional NAS or SAN)
The different types of DAS
This most common storage category includes the following models:
- Ultra-compact drives
- Portable drives
- Desktop drives
- RAID drives
Ultra-compact drives
Small storage units (USB keys, flash cards, etc.) have existed for a long time, but progress in flash storage, and in NVMe memory in particular, now makes it possible to combine compactness, capacity and performance.
For a few years now, products like the Samsung T5 (T7 in 2020) or the LaCie Portable SSDs have offered average read/write performance of 540 MB/s while remaining smaller than most smartphones.
These drives are not necessarily the cheapest per TB, but they are the best tool for quickly backing up the contents of a laptop and your daily data. Capacities currently go up to 2 TB, and the drives can now be hardware-encrypted to provide strong security in the event of loss or theft.
USB 3 Gen 2, USB-C and USB Type-A connections give them great versatility and interoperability.
Some products, such as LaCie's, benefit both from a long-term (5-year) warranty and from a data recovery service in the event of hardware failure. We consider them very good backup tools for mobile use.
Portable or transportable drives
We will deliberately ignore the USB 2/3 drives found in the supermarket around the corner, because they generally represent the worst this market has to offer. Their only attraction is their price.
These mostly mechanical (platter-based) drives are slow and fragile, and the USB connector is generally soldered on, which makes data recovery difficult if it breaks or fails. These drives are called mobile, but they are mobile in name only.
LaCie was a real pioneer with its (famous) Rugged range over ten years ago, recognizable by its orange protective bumper. These 2.5" drives had the great advantage, at the time, of being shock and water resistant, which made them extremely popular for photography, filming and post-production in mobile conditions.
These products have evolved to integrate hardware RAID, built-in memory card readers, or, at the very top of the range, NVMe storage.
Manufacturers like G-Technology entered this market too, but a decade later.
The Rugged, initially a solid but slow USB 2 mobile storage unit, has become a solid and extremely fast product with the arrival of the Rugged SSD Pro (up to 2,800 MB/s in read mode), capable of supporting workflows that are very demanding in bandwidth while on the move. It is also a perfect cache drive on Mac for DaVinci Resolve or the Adobe Creative Cloud suite.
Other, slightly different products such as the Rugged RAID Shuttle have arrived on the market recently. Originally designed for shooting, this product has found other uses: it offers 8 TB of capacity in RAID 0 in the footprint of a 13" laptop or a FedEx envelope. It quickly became essential for transfers between the different entities of a production because it combines good compactness, decent performance and suitable robustness.
The same goes for the Rugged BOSS SSD, which lets you back up rushes directly on set without going through a computer for the transfer, while keeping the compactness of a portable drive.
The extreme level of reliability, as well as the extended warranties combined with data recovery services, make this a range we particularly recommend.
Other manufacturers offer equivalent or competing products; however, our experience with them is not as convincing in terms of reliability, and especially of customer service in the event of a failure.
Desktop drives
The main characteristics of desktop hard drives have changed little over the years. What mainly defines them today comes down to three points: capacity, connectivity and design. A desktop drive is ultimately a 3.5" SATA drive connected to a USB or Thunderbolt controller (for daisy-chain connections). These enclosures hold a single drive and are therefore by definition limited in capacity and performance, but above all in security.
The platter drive has evolved a lot in recent years. The only way for manufacturers to counter the inexorable advance of SSDs was to work on capacity and cost per TB. It is now possible to obtain hard drives of up to 16 TB at a "reasonable" cost. Performance is on the order of 230 MB/s in read/write, which is perfectly adequate for common uses but can quickly become critical in post-production, because performance collapses under simultaneous access and is insufficient to support most current UHD codecs.
Consider using these drives as a backup or archive copy. In that case, keeping at least two copies becomes essential, because a drive, whether flash or mechanical, is by definition failure-prone. Double copying is probably the most economical solution to start with; however, it is extremely constraining and time-consuming.
In addition, it is not reliable in the long term, because the older a drive gets, the more likely it is to fail, and these products are designed to run, not to sit idle. Storing drives on a shelf for long periods leads to a high failure rate that increases year after year.
Using a "bare" hard drive in a good-quality dock like those offered by OWC and using a conventional external drive in an enclosure are technically no different. The difference lies in how they are stored and handled: the bare drive is of course far more likely to suffer a shock if handled daily.
To date, LaCie takes the prize for desktop drives. The d2 range is particularly practical and pleasing to the eye. Like the other products, it carries the 5-year warranty combined with the data recovery service, in versatile and robust Thunderbolt 3 / USB-C enclosures. The internal drives come from the enterprise range, which guarantees them a relatively longer lifespan than the competition's desktop drives. The d2 Thunderbolt 3 range is to be preferred, because it can easily be integrated into a chain of peripherals, unlike the d2 Professional model, which has the color of the iMac Pro but only a single USB-C port and therefore cannot be daisy-chained.
RAID drives
The main types of RAID:
RAID 0: a minimum of two drives are aggregated to combine their performance. This system is the fastest but carries an extremely high risk of failure, because if one drive fails, all the data is lost. This type of RAID should be avoided if your data is not permanently backed up on another secure device.
RAID 1: often used with small arrays or system drives. It consists of two mirrored drives: if one fails, the data remains accessible on the other. This type of RAID array does not improve performance but secures data. A two-drive RAID box advertised at 16 TB, for example, will have 8 TB of usable capacity in RAID 1 and 16 TB in RAID 0.
Experience shows that when one of the two drives fails in a RAID 1, the other quickly follows, especially if it is an SSD. If one of the drives fails, we recommend planning to replace the second drive after changing the first.
RAID 5 & 6: a minimum of 3 drives is required. This type of RAID combines the performance of each drive while remaining secure, since one drive can fail without any data loss. It is the RAID most frequently used in post-production.
However, if your data is not backed up at all, we recommend RAID 6 which uses two drives for parity and therefore offers better fault tolerance.
On most RAID 5 & 6 systems it is possible to allocate one or more "spare" drives, which take over automatically if one of the drives in the RAID array fails. This means "sacrificing" one or more slots in the RAID chassis, but it can prove very useful for very sensitive or poorly monitored systems, because the controller starts rebuilding as soon as a drive fails, without waiting for manual replacement of the defective unit.
As explained for RAID 1, drives tend to fail in cascade, because they were often manufactured at the same time and have accumulated the same number of hours and the same usage. This phenomenon is often observed on aging systems. If, after five or so years, you find yourself replacing drives too regularly, your whole array is at the end of its life: its reliability and performance are no longer there.
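As a rough sketch of the capacity arithmetic behind these RAID levels (ignoring controller and file-system overhead; the drive counts and sizes below are illustrative):

```python
def usable_capacity(level: str, drives: int, size_tb: float, spares: int = 0) -> float:
    """Approximate usable capacity (TB) for common RAID levels.

    Assumes identical drives; ignores controller and file-system overhead.
    """
    data = drives - spares  # hot spares hold no user data
    if level == "RAID0":
        return data * size_tb                 # all capacity, no redundancy
    if level == "RAID1":
        if data != 2:
            raise ValueError("RAID 1 mirrors exactly two drives")
        return size_tb                        # half the raw capacity
    if level == "RAID5":
        if data < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (data - 1) * size_tb           # one drive's worth of parity
    if level == "RAID6":
        if data < 4:
            raise ValueError("RAID 6 needs at least 4 drives")
        return (data - 2) * size_tb           # two drives' worth of parity
    raise ValueError(f"unknown level: {level}")

# The two-drive 16 TB box from the text: 16 TB in RAID 0, 8 TB in RAID 1.
print(usable_capacity("RAID0", 2, 8.0))            # 16.0
print(usable_capacity("RAID1", 2, 8.0))            # 8.0
# A 12-bay chassis with one hot spare in RAID 6:
print(usable_capacity("RAID6", 12, 8.0, spares=1)) # 72.0
```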
There are three main families of RAID drives: software RAID, hardware RAID with an external controller, and hardware RAID with an integrated controller.
Software RAID disks
These enclosures do not include an internal RAID controller. Manufacturers like OWC often offer this type of solution because it is more economical. It can be interesting, for example, as an SSD cache that stays permanently attached to one machine. Otherwise it is to be avoided, because the RAID management software must be installed and active on the computer the drive is connected to, and the risk of losing the RAID array, and therefore the data, is higher.
Hardware RAID with external controllers
This technology is most often used to manage SAS enclosures, or a set of drives mounted inside a PC. A PCIe card or a Thunderbolt box manages a set of hard drives connected via SAS or SATA. This flexible and efficient solution is widely used in production NAS and large grading systems, because it allows fine optimization of the RAID array(s) and can manage a relatively large number of drives and arrays.
With the advent of Thunderbolt, it has gradually been replaced by enclosures with an integrated controller, which are easier to transport and operate.
Hardware RAID disks with integrated controllers
This type of product is the most widespread in post-production today. The four main players in this transportable-product market are LaCie, G-Technology, Promise and Accusys.
This type of RAID enclosure starts at two drives configurable in RAID 0 or RAID 1 and goes up to 12 in RAID 5 or 6. Beyond 12 drives, you must move to rackmount systems, which require a server room or a dedicated room because of their noise level and cooling needs. Only Accusys offers transportable, chainable systems.
These arrays are designed to offer performance, capacity and reliability. A 6-drive system will provide approximately 980 MB/s in read/write over a Thunderbolt 3 connection, and roughly double that for a 12-drive model.
Even if LaCie solutions offer, for example, very high and stable performance in sequential access, they can quickly be caught out on random access. Here the G-Tech and Promise products (which share the same RAID controller) are better in random and concurrent access but weaker in sequential access. This difference makes it possible to match products to uses: we recommend the LaCie RAID solutions for editing and grading (except image-sequence workflows), and G-Tech / Promise for VFX or 3D, where simultaneous access to a large quantity of small files is required.
It is now possible to get all-SSD systems (the Shuttle SSD Pro from G-Tech) or all-NVMe ones. These are compact and very fast systems that allow read/write access of up to 2,800 MB/s, but often with low capacity; the cost per TB remains high compared to mechanical systems.
There are also large rack storage units of 16 or more drives, sometimes found on very large grading systems. Accusys is the most common manufacturer here because its systems are fast, reliable, yet affordable. These units are also found in large SAN or NAS shared storage installations from manufacturers such as Infortrend or Accusys.
The different types of NAS
The days when a NAS was only a shared storage area dedicated to archiving or to sharing administrative documents within an SME or an internal department of a large organization are definitely over. The abundance of products on this market allows everyone to choose the one matching their use: personal, administrative, video surveillance or post-production, to cite just a few examples.
Our goal here is not to go into the details of a "healthy and fast network infrastructure", but such an infrastructure is obviously a prerequisite for any effective network sharing solution.
Today we distinguish three main categories of products:
- The "house" NAS
- Versatile NAS
- Dedicated NAS for a specific use
The "house" NAS
The "house" NAS often starts from a basic need: sharing a folder over the network. It is from this observation that we start thinking about our needs and make do with the means at hand.
The easiest way is to share a folder from your PC or Mac and let others access it via the network devices tab. This use may suffice for most households. Very quickly, though, problems arise: the computer that gets restarted while it was sharing files, rights management (who deleted what...), the stability of the service, and performance if you are sharing video elements.
If you were on a Mac, the temptation was great (and legitimate) to download the late macOS Server and set up a network server "worthy of the name". This solution, although proven in various environments, was not remarkably stable and very quickly required more advanced computer skills. In addition, for various reasons inherent to Apple itself, the performance of these machines' network drivers was never really optimized, whether as a single link or an aggregate (not to mention the network cards of the PowerMac G5 of the era, which ran at half their theoretical speed, or those of the iMac Pro and the 2018-20 Mac mini).
To this was added the need to secure the storage space with RAID or other solutions. File sharing from macOS could quickly become expensive, complex and unstable under heavy load. In addition, rights management and the fragility of the HFS file system make these solutions brittle.
That said, we are aware of certain installations that run relatively well, including on recent equipment, because the people who installed them know their "whims" and watch them closely. But what happens the day that person is absent?
It is also very simple to share files from a Windows client, but with many limitations. You will therefore quickly consider switching to Windows Server, which costs much more and is much more complex (and complete) than the late macOS Server, for example. As soon as you want to manage rights and permissions, you will very quickly be tempted to set up an Active Directory service, which can become an unmanageable contraption if poorly managed or configured.
It will soon be possible to set up a relatively stable and efficient sharing service, but putting the solution into production will have had a real cost if done by a professional, or a human cost if deployed internally. The issues of rights management and of monitoring the solution are much the same as those mentioned for macOS (Server or not).
On a large scale, however, it is obvious that Windows Server is very accomplished and polymorphic, but does it really meet this particular need to share files simply and quickly? Nothing is less certain.
It is extremely easy to install the FreeNAS platform, an open-source platform based on FreeBSD. The installation generally goes well, it is reasonably well documented, and the community is very generous with advice and recommendations. However, you should not be afraid of getting your hands dirty, and a good knowledge of Unix is a prerequisite if you are not to lose control of the system quickly.
In addition, the plug-and-play nature of this OS means it runs on generic drivers and without optimization by default, which can quickly make the performance level disappointing, especially if the hardware underneath is powerful (processors, RAID controllers...).
For twenty years in the first case and sixteen in the second, Synology and QNAP have continuously offered and improved network sharing solutions. (For reasons of reliability and quality of support, we will use Synology in our examples; several unfortunate experiences with QNAP have not really convinced us.)
The current offer ranges from a simple 2-drive consumer NAS to systems of 72 drives or more. These solutions have a big advantage: they are based on an OS that is common to the whole range and relatively simple to use. It gives you access to a large number of software modules to add to the NAS, be it virtualization, streaming, a DHCP server, Active Directory and many others...
The catch is that these services consume processor resources, and this kind of NAS uses software RAID whose performance is partly tied precisely to that processor resource. Should your storage NAS host a virtual machine, FTP access and a transcoding service just because it can? It is better avoided.
However, these solutions have a big flaw: they are based on an OS that will be common to the whole range and not necessarily optimized in terms of performance.
The explanation: since their RAID is software-based, the level of performance will depend on many factors, such as the type of drives used, the type of processor, the amount of RAM in the NAS and its number of users.
These solutions are not dedicated to the audiovisual professions; they can work, but we are in a way diverting them from their intended use.
Concretely, a 12-drive Synology with 3.5" SATA drives in RAID 6 will cap out, in the best case (empty volume and a single test user), at 700 or 800 MB/s in single-stream read/write. A DAS or NAS with hardware RAID will deliver at least double that.
In addition, performance on simultaneous access to moderately heavy video resources will quickly collapse under multi-user load. Concretely, in a structure where 3 editors use the NAS, things will start to get complicated from the moment the third user starts working. The solution is not optimized for these trades: it applies no priority to the different streams, and the software RAID quickly reaches its limits.
If an operator then has the misfortune to arrive and start uploading rushes to the NAS, generating write requests where the editors need read performance, no one will be able to work properly. A key difference between SATA and SAS drives is that SATA cannot handle simultaneous reads and writes well, so performance collapses when both types of request arrive intensively or continuously.
The partial workaround is to add to this Synology NAS an SSD RAID corresponding, if possible, to the amount of data that will be used during the working day.
The files remain on this SSD RAID during the day and are copied to the HDD RAID when activity drops or the SSD RAID fills up.
This is now a fairly common solution, and it will allow our structure with three editors to edit without dropped frames as long as the amount of data to be copied by the cinematographer does not exceed the size of the cache. From the moment the system needs to destage to the HDD RAID, performance drops drastically.
This remains a good way to make a NAS usable in post-production, because the latencies due to the mechanical RAID are reduced. In our opinion, the big defect of this solution lies in two points:
If the SSD RAID fails, you risk losing all the data on it (in this case, the working day...), and the current caching technology works only in file mode. Concretely, say you have a Resolve project with 24 hours of rushes, from which you edit a 4-minute extract. If you use 10 seconds from inside a 40-minute rush, the entire rush is copied up into the SSD cache. Apply this rule to the whole project, multiplied by 3 or 4 editors, and you may very quickly end up with a saturated cache, and thus lose all the benefit of this technical "artifice".
Associated services and added value
We would currently recommend using this type of NAS for nearline or archival storage. If your infrastructure has fiber internet access, they can also serve as a personal cloud for different workstations, a VPN server, a file sharing server...
If the NAS belongs to the enterprise range, it may also be able to handle (as a stopgap) a video hosting service. As always, putting all your eggs, or all your services, in one basket is not recommended. High-availability solutions (i.e. having two identical NAS units so that one takes over if the other fails) exist but can quickly become expensive and complicated to maintain.
Dedicating a secondary NAS to local archiving, a VPN server and file sharing is the most common setup. This NAS is often a replica of the production storage and will serve as disaster recovery if the latter fails. In all cases, since this NAS is usually on site, it is recommended to copy its data off site, whether to another NAS or to a cloud space, in case of a major incident at the main premises.
Production servers (media)
The production server family is very large, reflecting the needs that can vary greatly from one structure to another.
Historically, collaborative storage in audiovisual production was based on SAN storage. In short, all client stations had physical access to the storage through a Fibre Channel link, while metadata management, handling identification of and access to files, went over a dedicated network. Other manufacturers offered an iSCSI connection over a dedicated network between the client and the storage.
All of these solutions had the advantage of being fast, because files were accessed in block mode, bypassing the slowness of network sharing protocols, which were not yet mature or fast enough. However, software had to be installed on the client workstation, it had to bypass antivirus and firewall, and it often required a dedicated license.
These infrastructures had two major problems: they were very expensive and could be very complicated to keep in production. It was not uncommon in large structures to have a technician dedicated to maintaining the SAN. In addition, it was almost impossible to add storage without "breaking" the entire SAN and being out of production for a few days.
Another disadvantage of these solutions was that they often required all client workstations to run the same OS and the same version of the SAN manager. The slightest software or hardware change could quickly become a headache.
For these various reasons, some storage manufacturers became interested very early in production storage based on "generic" network protocols. The advantage is that these protocols are "universal" and without limitation of the number of connected stations other than the capacity of the network infrastructure to manage the traffic.
Fibre Channel lasted relatively long, because the investments had been substantial and the equipment was durable. In addition, in many companies, the network infrastructure was not able to handle the bandwidth necessary for a production network.
Things started to change 4 or 5 years ago, when 10 Gb Ethernet switches started to become affordable. The transition was all the easier if your network cabling was recent enough to guarantee at least 650 MB/s between the client station and the server, which is more or less equivalent to the performance of 8 Gb Fibre Channel.
Consequently, we started to equip our customers with "hybrid" solutions, where the old machines could still connect to the storage over Fibre Channel and the most recent over 10 Gb Ethernet. The main drawback of this approach was that software still had to be installed on client workstations to connect to the storage. These technologies being historically based on SAN technology, the problems of increasing storage capacity or bandwidth and of managing server and client licenses, in short their weak points, remained the same.
These certainly proven and robust technological solutions are nevertheless beginning to show their age. Most manufacturers remain stuck in their approach and philosophy, while sharing over the network using generic protocols can now work just as well or better.
Falling prices for 10/25/40/50/100 Gb/s networking and the generalization of the corresponding interfaces and switches have made it possible to move to storage solutions based on the network alone, possibly reusing the fiber optic cables that had been installed for Fibre Channel.
It is from this moment on that the manufacturers who bet on all-IP rather than block storage 10 years ago can demonstrate the full extent of their know-how and their head start.
We have now chosen to work with GB Labs, an English storage manufacturer that seems to us to best address our customers' problems. This company was historically one of the largest integrators of SAN infrastructures in the United Kingdom, and they quickly realized that SAN solutions were sometimes oversized or unsuited to their customers' needs or ways of working.
They soon started to manufacture their own NAS units and acquired great know-how in optimizing network hardware and protocols to transport audio/video streams as quickly as on a SAN. Over ten years, they have developed expertise in optimizing protocols, network interfaces and switches to obtain the lowest possible latency while maintaining transfer rates.
Once this optimization work was done, they began to develop their own operating system for their servers, in order to optimize internal tasks as much as possible and to build tools that analyze the transported streams, allowing the server to adapt dynamically to the type of requests and the number of users.
They also realized that they had to provide their users (and their own support) with tools to identify points of failure on a corporate network, so that a cable inadvertently plugged into the wrong place by an unwary operator does not jeopardize the stability of the whole.
Which is exactly what they did.
IDA - VRE - NITRO
The foundation was there: dedicated servers optimized for our businesses, and network optimization capability without any software overlay.
They then naturally worked on optimizing their servers' RAID controllers to get the very best out of them, and quickly realized that mechanical RAID had its limits.
Very early on, they worked on a block-based (not file-based) caching technology built on the server's internal analysis tools. When the server finds that a user has accessed a data block 7 times within a defined time, it copies that block to its SSD cache. Since it copies only the blocks used and not the entire source file, this copy is transparent and nearly instantaneous, allowing a multitude of operators to access their working data with almost zero latency. Concretely, an editor works on average on three or four minutes of edited material daily, with maybe 60 minutes of rushes actually used. He will have privileged access to that data, even if there are 24 hours of rushes in his project, because only the fragments used are copied into the cache.
By comparison, the Synology mentioned earlier would have copied all of the rushes opened in the project to its cache, i.e. 24 hours' worth. That is useless: it takes up space in the cache and harms the overall performance of the system.
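The difference between the two strategies can be sketched roughly as follows (an illustration only: the block size and file sizes below are our assumptions, not GB Labs' published figures):

```python
BLOCK_MB = 4  # hypothetical cache block size, for illustration

def file_cache_cost(rushes_used):
    """File-level cache: touching any part of a rush pulls the whole file.

    rushes_used: list of (file_size_mb, actually_used_mb) pairs.
    """
    return sum(size_mb for size_mb, _used_mb in rushes_used)

def block_cache_cost(rushes_used):
    """Block-level cache: only the blocks actually read are promoted."""
    total = 0
    for _size_mb, used_mb in rushes_used:
        blocks = -(-used_mb // BLOCK_MB)  # ceiling division
        total += blocks * BLOCK_MB
    return total

# One editor using 10 seconds (~64 MB here) out of a 40-minute rush (~15 GB):
rushes = [(15_000, 64)]
print(file_cache_cost(rushes))   # 15000 -> the whole rush lands in the cache
print(block_cache_cost(rushes))  # 64    -> only the fragments used
```

Multiply the first figure by every rush opened across 3 or 4 editors and the file-level cache saturates quickly, which is exactly the problem described above.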
This advanced caching technology has made it possible to build another feature that is, to our knowledge, almost unique on this market: prioritization by stream type.
Anyone who has already used collaborative storage has found that it is often impossible to guarantee both continuous writing (video capture, for example) and stable simultaneous reading (editing) when they happen on the same physical volume. One task or the other would be disrupted by dropped frames, which can be critical during live stream acquisition.
GB Labs knows how to manage priorities by stream type, allowing the editors to work without being disturbed by the streams being captured by other production facilities, and vice versa. The data is thus captured on the same server and edited straight away, where it would usually have to be moved from an ingest volume to a working volume.
In the same logic, it can be useful especially on large installations to manage the bandwidth priorities for certain stations or users. GB Labs solutions allow you to do this in two ways:
The first (present on all their systems) lets you set a bandwidth quota per machine or group of machines (IP range). We know, for example, that a given edit station, even connected at 10 Gb, will only ever need 250 MB/s in read/write at most, but that its operator has the annoying habit of launching large, potentially penalizing and unnecessary copies from his station. We can therefore choose to cap that station to be sure the others are not hindered.
The second is dynamic bandwidth allocation. It is only present on the largest systems, because it relies on the "analytics center" available on the dual-processor servers, which use one processor for network sharing and the other for the analytics center and automation tasks. This solution allows dynamic allocation of bandwidth per user or per machine. Concretely, we define priority rules by importance, and the server applies them according to the server's usage rate. In practice, if user A has high priority, user B, who does not, will only be throttled while A is active.
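A minimal sketch of what such priority-based allocation could look like (the policy, names and numbers are our illustration, not GB Labs' actual implementation):

```python
def dynamic_allocation(total_mbps, requests):
    """Serve high-priority users first; share what remains among the rest.

    requests: list of (user, priority, demand_mbps); lower number = higher priority.
    Returns the bandwidth granted to each user.
    """
    grants = {}
    remaining = total_mbps
    for user, _prio, demand in sorted(requests, key=lambda r: r[1]):
        grants[user] = min(demand, remaining)  # cap each grant by what is left
        remaining -= grants[user]
    return grants

# User A (priority 1) is served in full; B is only throttled while A is active.
print(dynamic_allocation(1000, [("A", 1, 600), ("B", 2, 600)]))
# {'A': 600, 'B': 400}
print(dynamic_allocation(1000, [("B", 2, 600)]))  # A absent: B gets everything
# {'B': 600}
```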
On all systems, it is possible to set up automated replication tasks on a schedule. The servers can fetch files from a cloud space, another NAS or workspace, or an FTP server, and copy or move them to a predefined location. We can, for example, schedule a copy or download of rushes overnight and ask the server to copy them into the ingest workspace of a particular project.
The same is possible the other way around: you can ask the server to replicate a given folder to a cloud at 10 p.m., for example. And you can ask the server to email you to say whether the task succeeded or failed, as with any other operation the server performs.
The automation system present on the large systems goes even further. It incorporates the replication principles but is also capable of taking snapshots of one or more directories, somewhat like Time Machine on the Mac.
For example, we can decide that the directory containing the Adobe Premiere projects will be snapshotted every ten minutes. This makes it possible, after a mistaken operation, to return to the project as it was at 10:30 this morning. A snapshot is by nature incremental, so keep in mind that the space dedicated to snapshots must be much larger than the source.
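A back-of-the-envelope sketch of snapshot space sizing (a deliberately simplified model of our own; real snapshot implementations store and expire deltas in more sophisticated ways):

```python
def snapshot_space_gb(source_gb, change_gb_per_snapshot, snapshots_kept):
    """Rough space needed for incremental snapshots of a directory.

    Only changed data is stored per snapshot, but the deltas accumulate,
    which is why the snapshot area must be larger than the source itself.
    """
    return source_gb + change_gb_per_snapshot * snapshots_kept

# A 500 GB project directory snapshotted every 10 minutes over an 8-hour
# day (48 snapshots), with ~2 GB changing between snapshots:
print(snapshot_space_gb(500, 2, 48))  # 596
```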
The automation engine can do whatever it is asked in terms of file management. For example, we can easily create a rule where it scans RED rushes located on another NAS and copies only the .mov proxy files to another workspace while recreating the original tree structure, to make relinking easier afterwards. This is just a basic example.
It is a workflow building tool integrated into the server that can save a lot of human time on thankless and time-consuming tasks.
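A rule of the kind described above could be sketched in a few lines of Python (a generic illustration only; GB Labs' automation engine is configured through its own interface, not through scripts like this, and the paths are hypothetical):

```python
import shutil
from pathlib import Path

def copy_proxies(source_root: str, dest_root: str) -> int:
    """Copy only the .mov proxy files, recreating the source tree
    structure so that relinking in the NLE works afterwards.

    Returns the number of files copied.
    """
    src, dst = Path(source_root), Path(dest_root)
    copied = 0
    for proxy in src.rglob("*.mov"):          # skip .r3d originals, etc.
        target = dst / proxy.relative_to(src) # mirror the original tree
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(proxy, target)           # preserve timestamps
        copied += 1
    return copied

# Hypothetical mount points, for illustration:
# copy_proxies("/mnt/red_rushes", "/mnt/workspace/proxies")
```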
All new GB Labs storage systems include Mosaic as standard. It is a graphical interface for displaying, classifying and sorting the media present in your storage. Without being a full MAM, it allows users to find and classify their elements and add metadata fields to them, which can be exported to a MAM if necessary. It can be interfaced with Google AI so that the different objects in an image are identified and tagged.
Once the search is done, you can select media, add them to a basket and finally integrate them into your editing project. Various collaboration and annotation tools make it an intuitive, collaborative review tool. Mosaic is constantly evolving; do not hesitate to ask us for a demonstration.
It is the latest brick added to the GB Labs feature set: a high-availability fail-over system. Concretely, it requires two systems that are perpetually replicated, ideally located in two different places.
One system is defined as primary, the other as secondary. Users access the servers via a virtual IP address pointing to both servers. If the primary system encounters a problem, the secondary takes over immediately, with no appreciable interruption for users. The system administrator is notified by email and asked to resolve the problem on the primary server. When the primary system becomes available again, the data is synchronized and the primary server resumes its role.
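The decision logic behind such a fail-over can be sketched as a simple rule over node health plus a "synchronized" flag. This is our simplified model with hypothetical names, not GB Labs code.

```python
def route_vip(primary_up: bool, secondary_up: bool, primary_synced: bool) -> str:
    """Decide which node the virtual IP should point to.

    The primary serves traffic only while it is up and its data is
    synchronized; otherwise the secondary takes over. A recovered
    primary resumes its role only after re-synchronization."""
    if primary_up and primary_synced:
        return "primary"
    if secondary_up:
        return "secondary"
    raise RuntimeError("no healthy node: notify the administrator")
```

In normal operation traffic goes to the primary; after a failure the secondary serves until the repaired primary is back in sync, matching the sequence described above.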
Warranty, support and commercial policy
All GB Labs products carry a standard one-year hardware warranty. As a partner, Atreid has priority access to GB Labs support.
However, when purchasing your GB Labs equipment, we recommend signing a software and hardware support contract for the life of the product. It guarantees peace of mind and a fast response in the event of a problem.
In addition, the GB Labs support contract, relatively inexpensive compared to the competition, lets you benefit from software improvements if your system supports them.
Note that power supplies and hard drives can be covered for advance replacement if you purchase the Care Pack.
If Atreid manages your infrastructure, the GB Labs support contract will be integrated into a more global offer (contact us about this).
What about Tiger Technology and the older systems?
As many of our customers know, Atreid and Tiger Technology have been, and remain, partners of many years. However, changes in pricing policy and a strongly cloud-focused direction, to the detriment of other developments, prompted us to look for other systems capable of meeting our customers' needs.
The evaluation and testing phases took almost two years. We studied the different solutions on the market while learning from past lessons.
One of the biggest criticisms of Tiger Technology systems, as of other manufacturers, concerns the need to install client software on every user workstation.
This is why we quickly turned to manufacturers offering generic connection protocols. That is where we realized that not everyone offers the same thing at all, even when relying on the same protocols.
Many manufacturers offer solutions that could be adapted to post-production but quickly fall short, for lack of well-managed caching or of serious hardware optimization tailored to post-production.
GB Labs interested us because their approach focuses on your trades and real-world usage rather than on raw performance for one or two streams. The fact that they long installed their products themselves, together with their past as an integrator, gives us shared experience and a good understanding of our customers' problems.
In addition, once the systems were put into production with test customers, even those initially skeptical about the hardware specifics (number of disks, RAID controller) quickly realized that the solution did more than its job compared with any equivalent solution deployed under the same conditions.
Obviously, a customer who has recently invested substantial sums in, for example, a Fibre Channel solution will not want to (and will not need to) replace everything.
A key point in the use of your storage is the categorization and lifecycle of your data. It can be tempting to keep everything highly available on your fastest storage, but you will quickly realize that a large percentage of this data is rarely or never used and does not need high availability.
Our approach in these cases is to reuse the infrastructure and storage that still performs well and connect it to a product called EchoBridge. Through SAS or Fibre Channel interfaces, EchoBridge turns this storage into Nearline accessible over the network. Depending on the scenario, we will probably consider adding an all-SSD storage unit to serve as a daily cache for the data in production. Transfers back and forth between the cache and the Nearline are automatic but can also be triggered manually if necessary. For a Tiger Box user, the box is used as Nearline via an SMB share, removing the need for the Tiger client and its support contract.
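The automatic movement between cache and Nearline can be pictured as a simple age-based rule: files untouched for some number of days become candidates for demotion from the SSD cache to the Nearline. A minimal sketch, assuming the cache is a mounted folder and using last-access time as the criterion (our assumption for illustration; the real product applies its own policy):

```python
import time
from pathlib import Path

def files_to_demote(cache_root: Path, idle_days: float) -> list:
    """List cached files untouched for more than idle_days:
    candidates for automatic demotion from the cache to Nearline."""
    cutoff = time.time() - idle_days * 86400  # seconds per day
    return [p for p in cache_root.rglob("*")
            if p.is_file() and p.stat().st_atime < cutoff]
```

A companion rule would promote a file back to the cache on first access, giving the back-and-forth behaviour described above.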
In certain cases, Tiger Technology solutions may nevertheless remain the most suitable, and we will continue to offer them where that is the case. We will of course continue to maintain them and provide the necessary assistance throughout the life of the product.
© Atreid - May 2020 - Full article available here