A Day at the NetApp EBC in Amsterdam

Happy Easter to everybody 🙂 Last week was a very good week for me, particularly with all the travelling between Amsterdam – Geneva – Taize (France) – London. However, one of the main highlights of my week was the two-day visit to the NetApp EBC. In two words: simply amazing.

I was lucky to be part of the team that went to the NetApp EBC with some of the top sales and technical folks from one of our resellers (Insight Direct), and I got the chance to be part of the EBC experience in person. From the moment you step into the building you are well taken care of by a fantastic team who had arranged everything for the next couple of days. Particularly worth noting was the large screen displaying all NetApp partners, distributors and customers.

There were various presentations delivered by Grant Caley (NetApp CTO), Clemens Siebler (Senior Architect/TME, EPG) and Flashman (Laurence James), amongst various other brilliant speakers. NetApp's viewpoint on the future of technology and the discussions on current topics, including containers and hyper-converged infrastructure, were very interesting. Some of the highlights (IMO) are below:

  1. FlexPod Infrastructure Automation – technology built by NetApp together with Cisco, using UCS Director to simplify deployment of the entire FlexPod stack, including NetApp controllers, Cisco 9K switches and servers running ESXi. The new deployment model reduces the time to deploy to about 1 hour! This means you get the ease of deployment (something most people like about hyper-converged solutions!) together with best-of-breed infrastructure technology. Below is a link to a 5-minute demo – https://www.youtube.com/watch?v=50l0QxI-tG0
  2. Flash and Trash – primarily this means using all-flash for primary workloads and object storage ("trash") for archives and backup, either in the public cloud (Amazon S3) or in a private cloud. There are a few interesting videos from NetApp at Storage Field Day 9 (SFD9) that touch upon what NetApp is looking to do in this space: http://techfieldday.com/appearance/netapp-presents-at-storage-field-day-9/
  3. Cloud ONTAP – there are some interesting enhancements coming to Cloud ONTAP, especially on Azure – watch this space. In the meantime, if interested, check out this video on using the Cloud Manager APIs for Cloud ONTAP (a rough sketch of what such an API call could look like follows this list).
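
Since the Cloud Manager APIs came up above, here is a minimal Python sketch of what driving Cloud ONTAP through Cloud Manager could look like. This is an illustration only: the Cloud Manager address, endpoint paths and field names are my assumptions for the sketch, so check the official Cloud Manager API documentation before relying on any of them.

```python
# Minimal sketch: listing Cloud ONTAP working environments via Cloud Manager.
# The base URL, endpoint paths and JSON fields are assumptions for illustration.
import requests

CLOUD_MANAGER = "https://cloudmanager.example.com"  # assumed Cloud Manager host

def list_working_environments(email: str, password: str) -> list:
    session = requests.Session()
    # Assumed login endpoint that establishes an authenticated session
    session.post(f"{CLOUD_MANAGER}/occm/api/auth/login",
                 json={"email": email, "password": password}).raise_for_status()
    # Assumed endpoint returning the Cloud ONTAP (VSA) working environments
    resp = session.get(f"{CLOUD_MANAGER}/occm/api/vsa/working-environments")
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for we in list_working_environments("admin@example.com", "secret"):
        print(we.get("name"), we.get("status"))
```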

As I end this post, all I want to say is thanks to Lucero AlonsoCanero and Craig Menzies for hosting us – the EBC will hold a special place in my heart for sure 🙂

AltaVault – Helping move your data to the cloud in a cost-effective fashion

Over the last year, I've had many conversations with customers who are looking to move their data to the cloud. Obviously, for each customer the type of data varies based on a number of factors, including the applications in use, the business model, and the geographical location of data centres and branches. However, the primary concerns seem to be around getting the data to the cloud in an easy, cost-effective fashion and the degree of control over the data once it is in the cloud, including moving it out again when required – it reminds me of Google's driverless cars: they help people get from place to place, but someone else seems to be in control. In this post we'll focus on how to get the data to the cloud in the most cost-effective fashion using the AltaVault technology. I will cover how NetApp helps customers maintain control over their data in a later post.

Alta means 'high' in Spanish and vault means to jump or leap over. The technology acts as a cloud gateway that can de-duplicate, compress and encrypt data before it is sent to the cloud vendor of choice. The data could be residing on servers (DAS) or on external storage, and using backup software we can move it to the AltaVault platform, with a CIFS share or NFS export created as the target on the AltaVault. One key thing to remember is that because the AltaVault is a cloud gateway, it needs to be set up with cloud credentials as a target, i.e. you cannot operate it without connecting it to a cloud.
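
To make the workflow concrete, here is a minimal Python sketch of pushing a backup to an AltaVault share. It assumes the NFS export has already been created on the AltaVault and mounted on the backup host (the host name and paths are made up for illustration); in practice your backup software would simply be pointed at the share in the same way.

```python
# Minimal sketch: writing a backup archive to an AltaVault NFS export.
# Assumes the export is already mounted, e.g.:
#   mount -t nfs altavault01:/backups /mnt/altavault
# All host names and paths here are illustrative assumptions.
import tarfile
from datetime import datetime
from pathlib import Path

SOURCE_DIR = Path("/data/app")        # data we want to protect (assumed path)
TARGET_DIR = Path("/mnt/altavault")   # locally mounted AltaVault NFS export

def backup_to_altavault() -> Path:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = TARGET_DIR / f"app-backup-{stamp}.tar.gz"
    # The archive lands on the AltaVault, which then de-dupes, compresses,
    # encrypts and replicates the data to the configured cloud target.
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE_DIR, arcname=SOURCE_DIR.name)
    return archive

if __name__ == "__main__":
    print(f"Backup written to {backup_to_altavault()}")
```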

[Diagram: how AltaVault works]

AltaVault is sold today in several different forms, which include:

  1. Physical Appliances – AVA400 and AVA800
  2. Virtual Appliances – AVA-v8, AVA-v16, AVA-v32
  3. Cloud Appliances – AVA-c4, AVA-c8, AVA-c16

The physical appliances are highly scalable, from 48TB (12 x 4TB) up to 576TB (96 x 6TB), and maximum ingest performance ranges from 6TB/hr to 9.2TB/hr depending on the platform. They have both 10Gb and 1Gb ports that can be used.

The virtual appliances are quite interesting as well, as they can be deployed quite easily on ESXi or Hyper-V, with local usable capacities of 8TB, 16TB and 32TB depending on the model.

From a cloud appliance perspective, it is important to note that the AVA-c4 is the only model available in both AWS (Amazon Web Services) and Azure (Microsoft Cloud); the AVA-c8 and AVA-c16 can be deployed in AWS only.

My perspective on the technology

The wide compatibility with backup software vendors and with public and private clouds is quite cool, as a broad range of vendors is supported. The fact that we can back up from 'any platform' (it doesn't have to be a NetApp platform) to the AltaVault and on to any cloud provides a lot of flexibility and choice for a customer. Another interesting aspect of AltaVault is its data reduction technologies – inline de-duplication and inline compression – which, along with its encryption capabilities, reduce the footprint in the cloud, i.e. the actual capacity used (which does affect your monthly bill from the cloud vendor!), and alleviate security concerns around data in transit to the cloud.
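
To put some rough numbers on the 'monthly bill' point, here is a tiny back-of-the-envelope sketch. The de-duplication and compression ratios and the per-GB price are assumptions purely for illustration; real savings depend entirely on the data set and the cloud tariff.

```python
# Back-of-the-envelope sketch of how inline data reduction shrinks the cloud bill.
# The ratios and the price per GB are assumptions for illustration only.

def cloud_footprint_gb(logical_gb: float, dedupe_ratio: float, compression_ratio: float) -> float:
    """Capacity actually stored in the cloud after inline reduction."""
    return logical_gb / (dedupe_ratio * compression_ratio)

logical_backup_gb = 50_000   # 50 TB of logical backup data (assumed)
reduced_gb = cloud_footprint_gb(logical_backup_gb, dedupe_ratio=4.0, compression_ratio=2.0)

price_per_gb_month = 0.03    # assumed object-storage price in $/GB/month
print(f"Stored in cloud: {reduced_gb:,.0f} GB instead of {logical_backup_gb:,} GB")
print(f"Monthly bill:    ${reduced_gb * price_per_gb_month:,.2f} "
      f"vs ${logical_backup_gb * price_per_gb_month:,.2f} without reduction")
```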

I will admit it would have been nice to have the option to run the AltaVault device as a standalone disk backup target without having to hook it up to cloud services. However, taking into consideration the rate at which cloud is being adopted, the approach makes perfect sense.

Thanks for reading !

Building the Next Generation Datacenter with NetApp and SolidFire

Last week was a remarkable week in storage technology, with NetApp completing the acquisition of SolidFire!

Ever since NetApp publicly announced their intention to acquire SolidFire in December, there has been a lot of talk about what NetApp will do with SolidFire, how NetApp will position their flash portfolio, and the future of ONTAP (yeah, this is a weird one for anyone who has taken a proper look at NetApp's Data Fabric!), amongst several other questions.

There are a lot of excellent blogs that shed light on some of these questions, with the two below being the ones I liked the most:

  1. https://thedatamachine.wordpress.com/2016/02/03/netapp-solidfire-or-solidfire-netapp/
  2. http://blog.edmorgan.info/storage/2016/02/03/SolidFire-NetApp/

My perspective:

Architecture is a fundamentally important aspect of any technology, and building the right architecture depends to a large extent on understanding the future needs of the market. SolidFire's highly flexible architecture gives customers the capability to scale non-disruptively to 100 nodes, taking advantage of white-box economics while providing consistent, guaranteed performance to multiple tenants in a secure multi-tenant environment. Add storage efficiency technologies into the mix and we have a very strong proposition. SolidFire systems also have some other very interesting integrations, which include (but are not limited to):

  1. OpenStack: excellent Cinder integration that can be deployed very quickly, plus full compatibility with Swift object storage for integrated backup and restore
  2. Automation: fully automated provisioning, including integration with configuration management tools like Puppet (http://developer.solidfire.com/forum/puppet/116-puppet-provider-for-solidfire-clusters)
  3. Docker plug-in: a plug-in for Docker containers to create persistent storage while providing performance guarantees for each data volume
  4. REST API integration: a robust, native API for automation (a hedged example follows this list)
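
To give a feel for the native API mentioned in point 4, below is a small Python sketch of creating a volume with guaranteed QoS on a SolidFire cluster. It is based on the Element API as I understand it (JSON-RPC over HTTPS at /json-rpc/<version>); treat the cluster address, API version, method and parameter names as assumptions and check the Element API reference for your release.

```python
# Minimal sketch: creating a QoS-guaranteed volume via the SolidFire Element API.
# The cluster address, credentials, API version and parameter names are
# assumptions for illustration; verify against the Element API reference.
import requests

MVIP = "https://sf-cluster.example.com"  # cluster management VIP (assumed)
AUTH = ("admin", "password")             # cluster admin credentials (assumed)

def element_api(method: str, params: dict) -> dict:
    resp = requests.post(f"{MVIP}/json-rpc/8.0",
                         json={"method": method, "params": params, "id": 1},
                         auth=AUTH, verify=False)  # lab-only certificate handling
    resp.raise_for_status()
    return resp.json()["result"]

# Create a 100 GiB volume for a tenant account with min/max/burst IOPS guarantees
result = element_api("CreateVolume", {
    "name": "tenant1-vol01",
    "accountID": 1,
    "totalSize": 100 * 1024 ** 3,
    "enable512e": True,
    "qos": {"minIOPS": 1000, "maxIOPS": 5000, "burstIOPS": 8000},
})
print("Created volume ID:", result["volumeID"])
```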

With this extremely astute acquisition, NetApp are preparing themselves, and helping their customers prepare, for the future. SolidFire's strong play in cloud-like services and the third platform is a perfect fit with NetApp's own strategy around these services, and it could well be a key part of this amazing fabric that NetApp are building, giving customers even more choice, control and flexibility.

Personally, I am very excited and am looking forward to learning loads more about SolidFire in the coming weeks!

Starting off…

Hey everybody,

My name is Bino Thomas and I am an SE working for a distributor, Hammer Plc, based in Basingstoke, England. I completed my Masters in Engineering Management at Coventry University and my Bachelors in Electronics Engineering in India, and I enjoy all things technology.

The idea behind the blog is to post about events and recent happenings in the technology world and to share my perspective on them. I love football (Man United fan!) and chicken biriyani (the kind my mother cooks), and I am a huge ONTAP fanboy as well.

I believe that technology helps to solve complex problems (not world hunger!), but it is even more important to understand how to use that technology to get the best out of it.

So I hope everybody enjoys the posts coming up!