How Dark is your data?


How much data do you have? Do you know where and what it is? Do you know who can access that data, and who actually does? Analyse your data and discover how much of it is Dark Data!

Gartner recently released an analysis of unstructured corporate data, dividing it into three categories:

  • Mission Critical Data: Vital data with actual business value.
  • ROT Data: Redundant, Obsolete or Trivial data with little or no business value.
  • Dark Data: Uncategorized data that nobody really knows exists or what value to assign to it.

[Figure: Dark Data overview]

Dark Data in Denmark?

Veritas ran its own survey for EMEA and disclosed the numbers for Denmark in a separate report.

Although the percentage of Dark Data in Denmark was lower than the EMEA average, the amount of ROT data was high. This is often attributed to data hoarding; Veritas explains it in detail in the report, but in short: we just don't like deleting data, and we don't care about cleaning it up… We just buy more storage, which in turn costs more and more to manage.

[Figure: the Dark Data iceberg]

Proact Numbers

Based on our own customer assessments performed during 2015, we have found that roughly 50-65% of our customers' file data is "stale", meaning nobody has accessed it for 1-2 years. That's 50-65% of our customers' data just lying around, wasting logical and physical space on expensive storage, power and backup resources, often replicated in multiple copies and migrated repeatedly to new storage platforms, because no one has the insight to classify it and clean it up. A lot of data was found to be orphaned, with either no owner or an unknown owner; in other cases large amounts of data simply had the "wrong" owner.
We also found that much data was accessible to far more employees than necessary, leaving an open door to what could be some of a company's most valuable assets.
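
To make "stale" concrete, here is a minimal sketch in Python of the core measurement behind such an assessment: walk a file share and report how much of it has not been accessed for two years. The share path is a placeholder, the check relies on access times being recorded (mounts with noatime will skew it), and real tools like Veritas Data Insight add ownership and permission analysis on top.

    import os
    import time

    STALE_DAYS = 730  # the "1-2 years" rule of thumb from the assessments above
    cutoff = time.time() - STALE_DAYS * 86400

    total = stale = 0
    for root, _, files in os.walk("/srv/fileshare"):  # placeholder share path
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files we cannot stat
            total += st.st_size
            if st.st_atime < cutoff:  # last access is older than the cutoff
                stale += st.st_size

    if total:
        print(f"{stale / total:.0%} of {total / 2**30:.1f} GiB is stale")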

So what's your next step?

"Just migrate your data to the cloud" seems to be the suggested answer to any problem these days… Migrating unstructured data 1:1 to the cloud solves absolutely nothing, except helping your cloud provider reach its budget; it is what we have been doing for years between storage arrays.

Instead, follow three simple steps:

  1. Classify your data and evaluate the magnitude of the problem – this is what we call a Dark Data Assessment: a pre-defined service that only needs simple insight into your file servers. The Veritas Data Insight tool automatically scans and monitors your environment for a few weeks, after which we create a detailed report. In addition to Dark Data, the report gives an overview of which data is used/accessed, data that shouldn't be there in the first place (movies, music, backups, etc.), orphaned data and any identified security risks.
  2. Get rid of Redundant and Obsolete data, or assign correct owners.
    Get help locating duplicate, stale or orphaned data, and assistance with removing it, archiving it, applying correct ownership or simply assigning "custodians" (data owners responsible for the data and its safety) – see the sketch after this list.
  3. Improve corporate culture and data handling processes.
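
As referenced in step 2, duplicate detection is conceptually simple: group files by a content hash and flag every group with more than one member. A minimal Python sketch, again with a placeholder share path (real tools match on file size first for speed, and map owners as well):

    import hashlib
    import os
    from collections import defaultdict

    def sha256(path, chunk=1 << 20):
        """Hash a file's content in 1 MiB chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    groups = defaultdict(list)
    for root, _, files in os.walk("/srv/fileshare"):  # placeholder share path
        for name in files:
            path = os.path.join(root, name)
            try:
                groups[sha256(path)].append(path)
            except OSError:
                pass  # unreadable file: skip it

    for digest, paths in groups.items():
        if len(paths) > 1:  # identical content stored more than once
            print(f"{len(paths)} copies ({digest[:12]}…): {paths}")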

Yes, that last one can be a pain point, and it is going to take some time. Optionally, let Veritas Data Insight assist by continuously tracking newly arriving data and automatically reporting on steps 1 and 2, and/or use the "Data Custodian" feature to assign responsible data owners and handle access control.

Anything else?

On top of all this, Data Insight lets you audit and track all data usage in case something needs to be investigated or documented – something that is becoming increasingly important, especially after data theft. It also continuously scans for data that is not secured properly and attempts to detect "unusual" usage patterns, like users suddenly accessing data outside their normal pattern – what we call "Edward Snowden activity" :-)


Sources:

Databerg EMEA: http://imresourcecentre.com/en/articles/the-databerg-report-identify-the-value-risk-and-cost-of-your-data/?category=Insight
Databerg Denmark: http://imresourcecentre.com/media/1333/90648847_strike_exec_summary_dk.pdf (in Danish)
Gartner: https://www.gartner.com/doc/3077117/organizations-need-tackle-challenges-curb

NetBackup 7.7.1 is out


So 7.7.1 came out yesterday, and having already tested some of the new features in the beta program, I can say that some of them are quite interesting.
Download: fileconnect.symantec.com (and use your 7.7 serial number.)
Documentation: support.symantec.com/en_US/article.DOC8623.html

  • NDMP Accelerator: basically the same cool Accelerator (incremental forever) feature we already know, just for NDMP devices. Currently only supported on NetApp, and the target must, as always with Accelerator, be MSDP (or a supported OST device) for this to work – which might be a showstopper for some.
    Initial tests show that it is really easy to set up, and very effective.
  • VMware Intelligent Policies can now query based on VMware tags. Be aware that tag information is currently not backed up (and as a result not restored either) along with the VM; that part is targeted for the next release.
  • System State backups now support incremental backups. You may never have noticed, but when running an incremental agent-based backup of a server that includes System State, the System State part was always backed up in full (System State is extracted via an API in the OS, because these files are not all accessible via traditional backup). This will decrease incremental backup times in general, and we expect it will save a good amount of space on your MSDPs, because System State dedupes poorly – its composition is slightly different each time.
  • RedHat now supports 96TB MSDP pools, as SUSE has done for some time; see the explanation in the Appliance section.
  • MS SQL Always On support is something many have been waiting for – currently only for scripted backups, not yet for the new (cool!) SQL Intelligent Policy. This is because the SQL IP does not yet support cluster setups/virtual IPs, but it is on the roadmap!
  • Full Exchange 2016 support (including GRT).
  • The Oracle Intelligent Policy will support backup to Database Shares… ehh, Database Shares? See the Appliance section below.

The (for some dreaded) JAVA GUI

If you have already upgraded or read my previous blog post, you will know that NBU is moving to the JAVA GUI only. That has generated quite a few discussions and complaints. Personally I always preferred the Windows GUI despite working in UNIX environments :-) so I was initially a bit ambivalent about this myself.

All I can say is: it's not that hard to get used to. Yes, there are still a few features lacking in the JAVA GUI compared to the Windows GUI, but those gaps will soon be closed – I have already tested the next version of the JAVA GUI. Something much more exciting: I have seen previews of some of the new features coming to the GUI. I cannot disclose anything yet… but I think it looks cool!

Appliances (and 2.7.1)


On the appliance side we have to wait a bit for 2.7.1 (the current expectation is roughly one month), and some of you may already have heard the reason: the underlying OS is changing from SUSE Linux to RedHat. This is nothing customers have to worry about, because everything is hidden behind the web and CLI interfaces, but QA is more thorough this time, so we wait a bit longer.

When it is released, we are expecting (no promises):

  • Support for 4 disk shelves on the new 53xx appliance, bringing it close to ½ PB of usable deduplication storage in just 18U of rack space! Powerful media server included…
  • Database Shares (nicknamed CoPilot) feature.

Database Shares is a new way of backing up Oracle: it allows Oracle DBAs to send their database backups to a share on the appliance via NFS (a traditional disk dump) while keeping the knowledge about the backups in NetBackup, which can still perform secondary operations on the backups (like duplications) without affecting the production system. To some DBAs this will feel like getting the best of both worlds; to some backup admins it will seem a bit silly compared to just using the API… but who am I to argue religion :-)

It is my understanding that Database Shares will work with both Intelligent Policy and traditional RMAN scripting.

NetBackup 7.7GA is out…


I finished my testing several weeks ago, but I have been quite busy getting ready for my summer vacation. I guess those negotiations with Veritas about delaying 7.7 GA till after my vacation didn't go well :-), so here it is.

For those of you searching for the 2.7 version for the Symantec appliances – don't hold your breath, it's not coming out until 2.7.1 (it appears they might be dropping the fourth digit in version numbers again, but let's see what happens). It's not clear why it's being delayed; plausible reasons could be the planned OS change under the hood or perhaps support for the new 54xx series. I don't think it's because they don't trust the GA release, which has proven to be very stable. That said, Proact usually still tries to hold back major releases until the first patch is released – as we do with most other software releases.

As expected, I did not have time to test every little aspect of 7.7, but I got around to a lot of the important stuff and spoke to other beta testers about some of the rest.

SQL Intelligent Policy (SIP)
It's here, and I think it's pretty great. Let's clear up the most important thing first: the old scripted way works just as it always has, so you can migrate at your own pace, whenever you feel like it.

So how does it work? Very much like the Oracle Intelligent Policy: you install the client on your SQL servers, and the client presents the SQL Server instances it detects (with a little delay) to the master server in the new Applications view:
[Screenshot: the Applications view]

Several annoying things about the previous SQL agent have been addressed. Let's start with authentication. After the agent presents the SQL instances to the interface, you can either group them together or register each of them individually. During registration you can either use legacy authentication (setting a SQL service account on the NetBackup client service on each SQL server) or provide credentials from the NetBackup interface (this can also be done remotely by a DBA, so NBU admins don't know the login).
[Screenshot: instance registration]

Another annoyance was generating and distributing BCH scripts to all the SQL servers. No more of that: now you just browse your registered SQL instances and select the ones you want in your policy:
[Screenshot: selecting instances for the policy]

Select what "kind" of backup you want, from "ALL/WHOLE_DATABASE" down to individual file groups, files or databases:
[Screenshot: backup selection]
Log backups no longer require individual excludes for databases in simple recovery mode, so no more of those annoying status 1 backups.

Nothing is perfect though; a few things are missing from this first version, and some of them will obviously be an issue in certain environments:

  • No support for clustered MS-SQL servers or Always On. Symantec has assured us that this is the highest priority on the SIP roadmap, and I think we might see something as early as 7.7.0.1.
  • Currently no support for excluding individual databases, like the old “DATABASE $ALL + EXCLUDE XYZ_DB”. This is also on the roadmap.
  • Alternate restores still require you to manually edit BCH files. I get the impression that a makeover is in the works for the SQL client-side GUI, but nothing specific.
    I am really hoping this old client-side GUI is soon replaced with something better :-)

Hyper-V Intelligent Policy (HIP)
Most of you probably already know the VMware Intelligent Policy; now Hyper-V has the same (for 2012 R2). No more manual selection of VMs – you can now create queries for Hyper-V backups:
[Screenshot: Hyper-V query builder]

The Resource Limits option is also there, to help distribute the backup load automatically across the Hyper-V cluster(s). There is also integration with Microsoft SCVMM (which I have no experience with).

What's missing?

  • A few more query option fields would be nice, like VHD location (datastore).
  • Resource limits are primarily limited to snapshot operations.
  • SMB3 support is still not there – if this is an issue for you, please let me know, because Symantec is not aware of whether this is a big issue or not!

VMware
I didn't have the chance to install the new Instant Recovery plugin for vSphere (I am sharing some of my test environment with others, so this will have to wait for GA), but supposedly it works great. Just select a VM, request an Instant Recovery, and a few seconds later it is available in VMware, running from the backup storage.
Instant Recovery itself has been included since 7.6, so if you are not already using it, what's your excuse?

So what about VMware tags? This replacement for annotations, which many use to automate VM selection, has been out there for years now. Those of you who have migrated fully to the Web Client must be missing this as much as I do.
The thing is, VMware didn't include tag support in VADP (the backup API) until 6.0, and since the information is no longer stored with the VM but in the SSO database, there were some theoretical scenarios that needed to be dealt with – like what happens if you restore a VM whose tags have been removed from the vSphere environment since the backup: recreate them (much to the annoyance of the VMware admin) or skip them (but then the VM is not as it was when backed up) – and more.
There was a vote among larger customers and partners, and most are rooting for a phased implementation, where step 1 is the ability to query tags in the intelligent policy and step 2 is backing up and restoring tag information with the VM. I am hoping to see step 1 in 7.7.1 at the very latest.

Auditing and Security
This is something that more and more people worry about. Focus was on two things in the 7.7 GA release:

  • NBU has suffered from quite a few "world-writable files", i.e. files that any user on the clients/servers can access. Log and config files especially can contain sensitive information, and many of these have now been fixed. The focus is on removing the last of them for the 7.8 release.
  • In 7.5 a default audit function arrived that few people know about, but it didn't always log the logged-in user in multi-user environments, just administrator/root. This has been partially fixed, with more to come.
    Refer to the NetBackup Security and Encryption Guide for how to enable this new granularity.

The next focus area will be improving the audit function further and limiting the number of operations that require administrator/root access, so junior admins can work with NetBackup without those rights – something that is almost already there on the appliances today.

PS: this was the area I didn't have time to test myself…

JAVA GUI – the one and only!
Not sure there is much to say. The Windows GUI is gone… For those of us used to working with both the Windows and JAVA GUIs, it's not really a big deal :-) – and it is actually quite an improvement for environments already using the JAVA GUI, as some of its most annoying shortcomings have finally been (or will be) fixed. Personally, I reported a lot of small things during beta testing that the product team has agreed to fix – all things that will make the transition easier for you guys using the Windows GUI.

An important improvement is that you no longer have to click Apply before changing policy tabs; you can now move up/down in the detailed view of the Activity Monitor, and a "By Example" filter has been added. Check out the "User interface enhancements" section of the release notes for a full list of updates to the JAVA GUI: https://support.symantec.com/en_US/article.DOC8512.html

The most important features I have reported back as still lacking are:

  • Quick jumping by typing letters does not work (e.g. when browsing large policy/client lists).
  • Filtering previous results in the Activity Monitor is missing from the "By Example" view.

So to all the Windows admins out there: wipe away those tears and accept that the world moves on :-) It's really not that bad. If you disagree, don't kill the messenger…

What else?

I didn't get around to playing with NetApp cDOT support (though obviously it should just work), but I did look into the new S3 cloud support. This means that any S3-compatible cloud storage provider can get certified and sell you storage that plugs directly into NetBackup (Google, Amazon and others are doing this, also in Europe). The thing is, they are only selling you storage, so all data travels to the cloud un-deduplicated – unless you put a media server into the same cloud as well, and then I can suggest better solutions :-) Check out my previous 7.7 beta post.

What to be aware of

  • NetBackup 7.7 software does NOT support Windows Server 2003 (R2)!
    If your master/media server is on this platform and you want to upgrade, it might be the right time to migrate.
    For clients, just keep them on the latest 7.6 client and everything works as always, fully supported.
  • Legacy log file names have changed; remember to update any custom scripts that parse them.
  • The NetBackup Search "product" is being decommissioned – it simply wasn't used. Some features (the hold features) are being migrated into NetBackup, though.
  • If you have appliances in your environment, make sure you follow supported upgrade paths, or call us first!

Have a great summer out there!


Highly Available + SSD optimized – and cheap!


In this blog post, I will show how two features in Veritas Storage Foundation can help you make an application highly available and faster, without buying expensive hardware in the process.

One of our customers wanted to cluster their NetBackup Master server, to ensure that they would be able to initiate restores immediately in case they lost a data centre. They had the following requirements:

  • The existing production storage infrastructure was not to be used.
    This is our recommended practice when protecting the backup infrastructure: how else would you start recovering your production servers if you lost both them and your backup server at the same time?
  • Improve I/O performance on the application (in this case the NetBackup database/catalog).
  • Keep costs low (a fairly common request :-) )

Luckily, the solution is quite simple. With the introduction of Flexible Storage Sharing (software-defined storage) and SmartIO (SSD/flash optimization) in Storage Foundation 6.1 for Linux back in 2013, we now have the option to provide all of this on simple commodity servers.

What do we need?
2 servers for the cluster configured with:

  • CPU and Memory sized as required by the application
  • Internal disks with adequate space for the application
  • 2 x 10 Gbit network interfaces for the interconnect between the nodes (or InfiniBand for even higher performance).
    They can be direct-cabled between the nodes to save infrastructure cost, or kept separate from existing infrastructure.
  • 1 or more network interfaces for the public network
  • Optional internal SSD or flash card for I/O optimization

[Illustration: FSS cluster]

In addition, the cluster needs a Cluster Coordination Point Server (CPS), which the nodes can contact via the network in case of a split-brain situation (loss of network connectivity between the cluster nodes). The CPS then gets to decide who owns the application.

The CPS is included in the product and can be installed on existing servers in the environment; they do not have to be dedicated to the task.
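
As a toy model of that arbitration, here is a small Python sketch: when the interconnect drops, both nodes race to the CPS for ownership, the first one wins, and the loser takes itself down rather than risk both nodes writing to the same mirrored volumes. This is purely conceptual, not how the actual CPS protocol works on the wire.

    import threading

    class CoordinationPoint:
        """Toy arbiter: grants ownership to the first node that asks."""
        def __init__(self):
            self._lock = threading.Lock()
            self._owner = None

        def request_ownership(self, node):
            with self._lock:
                if self._owner is None:
                    self._owner = node  # first requester wins the race
                return self._owner == node

    cps = CoordinationPoint()

    def on_interconnect_lost(node):
        if cps.request_ownership(node):
            print(f"{node}: won arbitration, keeping the application online")
        else:
            print(f"{node}: lost arbitration, shutting down to avoid split-brain")

    for n in ("node-a", "node-b"):
        threading.Thread(target=on_interconnect_lost, args=(n,)).start()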

How does it work?
The cluster shares the internal disks between the two nodes via the interconnect, so each node sees the disks as if they were local. When you create file systems, Storage Foundation automatically mirrors them between the nodes. If one node breaks down, the other mirror copy stays active; meanwhile, Storage Foundation tracks all writes in the background, so once the failed node comes back up, only the most recent changes need to be synchronized back to it.
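
The trick that makes the fast resync possible is change tracking at a coarse granularity. Here is a minimal sketch of the idea in Python – the region size, the names and the in-memory "volumes" are all illustrative, and Storage Foundation's real logging is considerably more sophisticated:

    REGION_SIZE = 64 * 1024  # bytes per tracked region (illustrative)

    class MirroredVolume:
        def __init__(self, size):
            self.data = bytearray(size)
            self.dirty = set()        # region indices changed while the peer was down
            self.peer_online = True

        def write(self, offset, payload):
            self.data[offset:offset + len(payload)] = payload
            if not self.peer_online:  # peer down: remember which regions changed
                first = offset // REGION_SIZE
                last = (offset + len(payload) - 1) // REGION_SIZE
                self.dirty.update(range(first, last + 1))

        def resync(self, peer):
            # Only regions touched during the outage cross the interconnect,
            # instead of a full re-mirror of the whole volume.
            for region in sorted(self.dirty):
                start = region * REGION_SIZE
                peer.data[start:start + REGION_SIZE] = self.data[start:start + REGION_SIZE]
            self.dirty.clear()
            self.peer_online = True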

What about performance penalties?
Writes obviously have to be committed on both cluster nodes before they are safe, which is why we recommend only 10 Gbit or InfiniBand solutions for this kind of setup.

Writes going directly to an internal disk will always be faster than having to commit two writes, one of which has to travel over the interconnect. However, consider the traditional cluster setup, in which a write has to traverse the SAN and all the components of the storage system as well – same same… and I know which of the two options is cheaper!

This is also where the option of enabling SmartIO comes into play. SmartIO lets you use a local SSD or flash card for read and write-back optimization, without the penalty of any external resources:

[Illustration: SmartIO caching]

SmartIO works in both standalone and clustered setups (write-back caching in clusters is currently limited to 2-node clusters). Please refer to a previous Proact blog entry for examples of how SmartIO increased performance in an Oracle setup: http://blog.proact.dk/accelerating-oracle-symantec-storage-foundation
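
For intuition, here is a minimal write-back cache in Python: reads and writes hit the fast local tier first, writes are acknowledged before they reach the slow backing store, and dirty blocks are flushed later. It also hints at why clustered write-back is restricted: acknowledged-but-unflushed data must survive a node failure, which is why the cache has to be mirrored between the two nodes. Purely conceptual, not the SmartIO implementation.

    from collections import OrderedDict

    class WriteBackCache:
        def __init__(self, capacity, backing):
            self.capacity = capacity
            self.backing = backing            # stands in for the slow mirrored volume
            self.cache = OrderedDict()        # block -> data, kept in LRU order
            self.dirty = set()                # blocks not yet flushed to backing

        def read(self, block):
            if block in self.cache:
                self.cache.move_to_end(block) # cache hit on fast local flash
                return self.cache[block]
            data = self.backing.get(block, b"")
            self._insert(block, data)         # miss: fetch from the slow tier
            return data

        def write(self, block, data):
            self._insert(block, data)
            self.dirty.add(block)             # acknowledged before hitting backing

        def flush(self):
            for block in list(self.dirty):
                self.backing[block] = self.cache[block]
            self.dirty.clear()

        def _insert(self, block, data):
            self.cache[block] = data
            self.cache.move_to_end(block)
            if len(self.cache) > self.capacity:
                old, val = self.cache.popitem(last=False)
                if old in self.dirty:         # evicting dirty data forces a write
                    self.backing[old] = val
                    self.dirty.discard(old)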

Q&A:
Q: Could this work in a virtual environment as well?
A: Yes, and Symantec even provides a plugin that allows the nodes to interact with VMware; contact us for more information.

Q: What if I wanted to do this over a long distance?
A: Still possible, although we would use Veritas Volume Replicator (VVR) instead of FSS, which allows synchronous or asynchronous replication of data in a consistent manner between servers using standard TCP/IP – theoretically over unlimited distances!

Q: Do I really need a full cluster setup to protect my NetBackup Master server / application?
A: No, there are also simpler ways to protect your application, but it will require a more in-depth discussion of the environment and RTO/RPO requirements.

Want to know more?
If you have a minute, spend it smiling at this Americanized cartoon-description of FSS – have fun :-) – https://www.youtube.com/watch?v=HJX4DDDFvqg

SmartIO White Paper:
https://www.symantec.com/content/en/us/enterprise/white_papers/b-storagefoundation6-1-whitepaper-21327403.pdf

Software-defined Storage at the Speed of Flash:
https://www-secure.symantec.com/connect/sites/default/files/21344088_GA_WP-Intel-and-Symantec-Software-defined-storage-at-the-speed-of-Flash-0115.pdf

Geek mode on:
https://www-secure.symantec.com/connect/videos/flexible-storage-sharing-introduction-and-demo-0

NetBackup 7.7 is on its way…

Time to blog about the near future of NetBackup :-)


NetBackup 7.7 is out in beta, and as always we are participating in the beta testing program. Within a few weeks I will test some of the hot new features I know many of you have been waiting for, and blog about it here. Just to give you a taste of what's coming, I will sum up the expected features of the new version ("expected" because this is beta – if things don't pan out, they retain the option of removing features from the GA product):

  • SQL Intelligent Policy (SIP) – Yes, it's finally here: a new MS SQL backup method. Not that SQL backups haven't worked for a very long time, and not that they weren't stable, easy and fast – they just never felt "integrated" the same way as the other agents.
    I know Veritas has been working on this for a long time, wanting to make it right the first time, so let's hope for the best :-)
  • Hyper-V Intelligent Policy (HIP) – Yes, it's HIP to use NetBackup, now more than ever.
    NetBackup has caught up with its own VMware backup features. Support for Hyper-V has been there since the beginning, but it was never as cool as the VMware Intelligent Policy, which lets you select which VMs to back up in a thousand ways (or at least close to it).
    This includes integration with Microsoft SCVMM.
  • The VMware Intelligent Policy (VIP) gets a few cool new features as well. Instant Recovery (starting your backed-up virtual machines instantly from the backup storage without doing an actual restore) is now integrated with the VMware Web Client plugin, so VMware administrators can start/test VMs without assistance from the backup team.
    Still unknown to many users, VMware vCenter integration has been a (free) part of the NetBackup suite for years, including the Web Client since 7.6.1, so nothing new there.
    PS: yes, vSphere 6 support is there and has been since the day before VMware launched vSphere 6. Be aware, though, of a few bugs in VMware's backup API, so contact us before you consider upgrading to vSphere 6.
  • Improved Auditing – The “Who-did-what-and-when” feature just got more detailed.

OK, and perhaps one not so many of you have been waiting for:

  • Improved JAVA GUI – Yes, it's true: Veritas is focusing on a single GUI instead of two almost identical ones. This means that as of 7.7 the Windows GUI is supposed to be phased out, and hard work has gone into ensuring that all features lacking from the JAVA GUI have been implemented – but did they make it? Let's find out…
    The obvious question is why. Well, development hours can be spent better than on maintaining two GUIs; more and more customers are moving towards NetBackup appliances, because it is much easier to maintain an isolated backup environment on dedicated pre-configured hardware, and the JAVA GUI suits that direction better. It also provides a more secure approach to user handling, which is why many customers have actually been using the JAVA GUI for some time, even in Windows environments.

Finally, additional support is expected for the following products:

  • NetApp cDOT support for both NDMP and Replication Director
  • Support for S3-compatible cloud providers: Google Nearline, Amazon Public, Amazon Gov and Hitachi. But who wants that(?) when you can send your backups to the local Proact Datacenter with full integration to NetBackup.
  • SAP HANA on RedHat

Keep tracking Proact's blog for real-life experiences with the new beta, right here…

NSX – what is that?

Over the past year there has been no doubt that if you work with VMware, or have even the slightest contact with it, you have been exposed to the concept of the Software Defined Data Center, also called SDDC.

At Proact we have also thrown ourselves into SDDC and, besides building a seriously strong SDDC team, we have actually built an environment for hosting our customers' data, known day-to-day as PHC – Proact Hybrid Cloud.

On the software side, PHC is built 100% on products from VMware, from networking through the hypervisor to the automation engine. It is through this project that I got acquainted with the most hyped VMware product of the last two years: vSphere NSX. This article is the first in a series in which I will describe at a high level what NSX is and how it is used. In later articles we will dig deeper into the individual parts of NSX.

NSX is an overlay technology on top of a traditional physical network. With NSX, a number of traditional network services move up into the software layer – switching, routing, load balancing and even firewalling. This means the requirements for the underlying network infrastructure can be kept to an absolute minimum. The derived effect is independence from individual hardware vendors and their specific hardware – what a good marketing term calls network virtualization. In other words, we can do for the network layer what we did for the server layer 10 years ago.


So what can it be used for?

NSX has a range of use cases, and the customers who will see the most immediate benefits are those with many virtual machines and frequent changes in their network layer. Service providers are also squarely in the target group.

These use cases include, among others:

Self-service and automation

A very strong use case is the ability to automate the roll-out of new networks and network-related services. Integration with NSX is typically done via vRealize Orchestrator or via REST API calls; a hedged example of such a call is sketched below. If you use vRealize Automation, for example, the integration is already built in, and deployments can then be ordered through the self-service portal.
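
As a taste of the REST route, here is a minimal Python sketch that creates a logical switch. The endpoint and XML payload follow the NSX-v API as I recall it, and the manager address, transport zone ID (vdnscope-1) and credentials are all placeholders – verify everything against the NSX API guide before relying on it:

    import requests

    NSX = "https://nsx-manager.example.com"  # placeholder NSX Manager address

    payload = """<virtualWireCreateSpec>
      <name>dev-team-42</name>
      <tenantId>dev</tenantId>
    </virtualWireCreateSpec>"""

    # POST to a transport zone (scope) to create a new logical switch.
    resp = requests.post(
        f"{NSX}/api/2.0/vdn/scopes/vdnscope-1/virtualwires",
        data=payload,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "password"),  # placeholder credentials
        verify=False,                # lab only; use proper certificates in production
    )
    resp.raise_for_status()
    print("Created logical switch:", resp.text)  # the API returns the new virtualwire ID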

At Proact we have delivered a number of projects for customers ordering isolated development environments, where a user gets their own network segment deployed, with the associated virtual machines, behind a firewall that is likewise rolled out for the purpose.

Which naturally brings us to…

Micro-segmentation and security

Traditionally, security in many installations has meant protecting the environment with an outer, so-called perimeter firewall. Once through that firewall, you often have free access to the resources on the inside.

With VMware NSX, the firewall functionality is more granular. You can use both outer firewalls, in the form of Edge Services Gateways that act as traditional firewalls, and a distributed firewall that operates on every single virtual NIC in the entire installation. That way, rules can be enforced down to each individual NIC of a given VM. The distributed firewall runs as a kernel module on every ESX server in the environment, and its throughput scales with the number of servers.


Third-party integration

A further strength of NSX is the large partner ecosystem that backs it and delivers products that integrate directly into it – anything from load balancing with e.g. F5, to antivirus with Symantec, to advanced firewall functionality with Palo Alto Networks.


All in all, VMware NSX offers a range of possibilities for decoupling your network from the underlying hardware, and the included security functionality will certainly help strengthen environments around the country.

As mentioned in the introduction, this introductory blog post is the first in a series in which we will go deeper into the product.

NetApp OnCommand System Manager and Java 8.x problems

OnCommand System Manager 2.2 up to 3.1.2 (RC1) does not work with Java 8.x

If you tend to update your Java to the latest version, which you generally should, you will occasionally find that some of your programs no longer work afterwards. NetApp OnCommand System Manager is no exception: versions 2.2 up to 3.1.2 (RC1) do not support Java 8.x. The recommended version is Java 7.75.

Sometimes you can hit the right combination of versions and still get System Manager to work with Java 8.31, even though it is not supported. But if it does not work and you want to keep your Java fully updated, there is a quite easy workaround: in the folder containing SystemManager.exe there is also a SystemManager.jar, and that one works fine with Java 8.x.

So if you have trouble starting System Manager the normal way, via the shortcut on the desktop/Start menu, try opening the jar file directly; that should bring System Manager up. It is not transparent what the exe file actually does beyond checking a few things and then calling the jar file, so by skipping those checks and starting the jar directly, System Manager starts. A minimal way to script that is sketched below.
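
For example, with a small Python launcher – the install folder below is an assumption, so adjust it to wherever SystemManager.exe actually lives on your machine:

    import subprocess
    from pathlib import Path

    # Assumed default install folder; adjust to match your installation.
    install_dir = Path(r"C:\Program Files (x86)\NetApp\OnCommand System Manager")

    # Launch the jar directly, bypassing the exe wrapper and its checks.
    subprocess.run(["java", "-jar", str(install_dir / "SystemManager.jar")],
                   cwd=install_dir, check=True)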

There are two possible reasons why NetApp has "chosen" not to support Java 8 in System Manager: either they rely on things that changed between the Java versions and are working on a new version, or they have made the mistake almost every programmer makes from time to time and hardcoded a path where it was unwise to do so.

If the latter is the case, there are at least no issues with using the jar file directly, bypassing the exe. But as mentioned, some functions may not work, because they use Java 7 features that behave differently when executed under Java 8.

NetApp expects to release version 3.1.2, which supports Java 8, during April or May.

NetWorker Instant Access recovery of VMs

One of the cooler additions in the latest version of EMC NetWorker, 8.2, with DataDomain integration is definitely Instant Access. Instant Access lets you boot a VM directly from backup storage, i.e. on the DataDomain where your backup is stored, without first performing an actual recovery of the VM.

This gives you access to a service inside the VM, or lets you pull out any files you may need, before possibly migrating the VM to a production datastore.

As an example I have taken an Oracle Enterprise Linux 6.6 VM, but the principle works for all operating systems. The VM must of course be enrolled in a VBA data protection policy. If a full backup of the VM has not already been taken, we run one by going into vCenter, selecting our VBA, choosing Reports, selecting the VM and starting a backup of it:

[Screenshot: selecting the VM]

Once we have a full backup of our VM, we are ready to perform an Instant Access recovery. We again use the VBA administration part of vCenter (also called EBR) and choose Restore. Select the VM you want to recover from the list and click Instant Access:

[Screenshot: choosing the VM to recover]

Select the full backup you want to perform Instant Access recovery from; here we only have one.

[Screenshot: selecting the backup]

The wizard suggests a datastore location and a new name for the VM; check that this will not cause problems, adjust if necessary and continue:

[Screenshot: choosing the destination]

Finally you get a confirmation that Instant Access has been started, and shortly afterwards (yes, we are talking seconds here!) you will see your VM resurrected in vCenter:

[Screenshot: the recovered VM in vCenter]

You can now boot your VM, but do make sure you will not create IP address conflicts or similar before starting it.

A prerequisite for Instant Access is that backups are written either to internal storage in the VBA or to DataDomain. If you store them on DataDomain, save sets can be cloned on to e.g. ADV_FILE or tape if you wish.

EMC announces VSPEX Blue (EVO:RAIL)

At VMworld in August, VMware announced their new reference architecture called EVO:RAIL. EVO:RAIL is a hyper-converged appliance consisting of processing, memory, storage, networking and management.

The idea behind EVO:RAIL is to combine hardware and virtualization in a single appliance, to simplify deployment and administration of the physical infrastructure. From powering on the box until you have an infrastructure running and ready to deploy virtual machines takes at most 15 minutes.

One appliance contains 4 nodes (ESXi hosts) and can be extended with 3 further appliances, up to 16 nodes per cluster. Each appliance is sized to handle around 100 virtual machines, depending on the size of each individual VM. A fully built-out cluster can thus run around 400 VMs.

VSPEX Blue hardware specifications:

  • Form factor: 2U
  • Nodes: 4, hot-swappable
  • Processors per node: dual Intel Ivy Bridge processors (up to 130W)
  • Memory per node: four channels of native DDR3 (1333)
  • Memory configurations: 128GB or 192GB
  • Drives: up to 16 (four per node)
  • Drives per node: 1 x 2.5” SSD, 3 x 2.5” HDD
  • Drive capacities
    • HDD: 1.2TB (max total 14.4TB)
    • SSD for caching: 400GB (max total 1.6TB)
  • Power supply: 2 x 1200W (80 Plus Platinum), redundant, hot-swappable
  • Dimensions: 17.24” x 30.35” x 3.46”
  • I/O per node:
    • Dual GbE ports
    • Optional IB QDR/FDR or 10GbE integrated
    • 1 x8 PCIe Gen3 I/O mezzanine option (quad GbE or 10GbE)
    • 1 x16 PCIe Gen3 HBA slot
    • Integrated BMC with RMM4 support


General EVO:RAIL software specifications:

  • EVO: RAIL Deployment, Configuration, and Management
  • VMware vSphere® Enterprise Plus, including ESXi for compute
  • Virtual SAN for storage
  • vCenter Server™
  • vCenter Log Insight™

Unique EMC VSPEX Blue value-adds

  • EMC VSPEX Blue Manager
  • EMC VSPEX Blue Market
  • EMC VSPEX Blue support with ESRS
  • EMC VSPEX Blue RecoverPoint for VM
  • EMC Cloud Array Virtual Edition

EMC VSPEX Blue Manager is a unified administrative interface that handles both the hardware and the software configuration. See a demo of VSPEX Blue Manager here.

EMC VSPEX Blue Market is an "app store" that makes it possible to add new features. The idea is that third-party vendors can add new value-add features through EMC VSPEX Blue Market.

EMC VSPEX Blue support with ESRS is the good old EMC Secure Remote Support, giving quick and easy access to EMC support for fast troubleshooting.

EMC VSPEX Blue RecoverPoint for VM enables per-VM replication of VMs, both locally and remotely. Each VSPEX Blue appliance comes with 15 per-VM licenses for RecoverPoint for VM.

EMC Cloud Array Virtual Edition adds a 1TB local cache and 10TB of cloud storage licenses, enabling public cloud object-based storage such as Amazon AWS S3, vCloud Air and others.

Read more about EMC VSPEX Blue at this link.

Proact Danmark attends VMware's partner conference (PEX)

These days VMware is holding its annual partner conference, also known as VMware PEX. It takes place in San Francisco, and as part of Proact's virtualization push we are sending several people from Proact's virtualization team.

Proact has been invited to the Executive Roundtable, a half-day workshop with VMware's top management. This gives Proact a unique opportunity to influence the partnership and contribute to VMware's development from a Danish point of view. Proact is one of only two Danish partners invited.

VMware has announced a "28 days of February" event, starting on 3 February at VMware PEX. According to VMware this is the biggest launch in VMware's history, and we look forward to what the coming days will bring.

The focus at PEX is on the Software Defined Data Center, but there are also sessions and news about VMware's End User Computing strategy.

Proact's focus

Proact Danmark is certified as both a "Solution Provider" and a "Service Provider" and will focus on sessions and news within both areas. Proact has signed up as a Service Provider under the vCloud Air Network programme and has since worked intensively on implementing our own Hybrid Cloud located on Danish soil. Proact's Hybrid Cloud is primarily built on VMware's SDDC "stack", where a service portal lets you quickly extend your private data centre with additional resources. We therefore look forward to following the news VMware will announce during February, and especially the announcements during this week at PEX.

Beyond this, we look forward to hearing about NetApp's and EMC's EVO:RAIL solutions, as EVO:RAIL looks like a very good fit for the Danish market.

As it is also a technical conference, the plan is for our consultants to take a couple of certifications during the week, in addition to attending technical boot camps on, among other things, NSX and Software Defined Storage. There are also plenty of opportunities to try VMware products in the "Hands-on Labs".

Product-wise, vRealize Automation and NSX in particular will have our focus.

In the coming days we will try to bring news from PEX here on the blog, to the extent possible.