Digital Transformation & Data Mobility

If "Cloud" has been the most used word of the last five years, the phrase buzzing around the IT world in the last few months is "Digital Transformation".

From Wikipedia:

Digital Transformation (DT or DX) is "the adoption of digital technology to transform services and businesses, through replacing non-digital or manual processes with digital processes or replacing older digital technology with newer digital technology."

Or, put another way: Digital Transformation must help companies become more competitive through the fast deployment of new services that are always aligned with business needs.

Note 1: If Digital Transformation is the basket, the technologies are the apples, the services are the means of transport, and the shops are the clients/customers.

1. Can all existing architectures support Digital Transformation?

  • I prefer to answer by rephrasing the question in more appropriate words:

2. Does Digital Transformation require that data, applications, and services move from and to different architectures?

  • Yes, this is a must, and it is called Data Mobility.

Note 2: Data Mobility refers to the technologies able to move data and services among different architectures, wherever they are located.

3. Does Data Mobility mean that services can be independent of the underlying infrastructure?

  • Not completely; it means that, even though there is currently no standard language allowing different architectures/infrastructures to talk to each other, Data Mobility is able to overcome this limitation.

4. Is it vendor-independent?

  • When a standard is released, all vendors rush to implement it because they are sure those features will improve their revenue. For now, such a standard does not yet exist.

    Note 3: I think the reason is that there are so many objects to count, analyze, and develop that the economic effort is, at the moment, not justified.

5. Is there already a ready-to-use "Data Mobility" technology?

The answer could be quite long but, to make a long story short, I have organized the rest of this article into two main parts:

  • Application Layer (Container – Kubernetes)
  • Data Layer (Backup, Replica)

Application Layer – Container – Kubernetes


In the modern world, services run in virtual environments (VMware, Hyper-V, KVM, etc.).

There are still older services that run on legacy architectures (Mainframe, AS/400, ...); old doesn't mean they are not updated, just that they have a very long history.

In the coming years, services will increasingly run in a special "area" called a container.

A container runs on top of an operating system and can be hosted on a virtual, physical, or cloud architecture.

Why are containers, and skills in them, so much in demand?

There are many reasons, and I'm listing the main ones below.

  1. IT managers need to move data among architectures in order to improve resilience and lower costs.
  2. Container technology simplifies developers' work because it relies on a standard, widely used language.
  3. Services run in containers are fast to develop, update, and change.
  4. The container is de facto a new standard with a great advantage: it overcomes the lack of a common standard among architectures (private, hybrid, and public cloud), as the sketch after this list illustrates.
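To make point 4 tangible, here is a minimal sketch using the Docker SDK for Python (the `docker` package): the same public image runs unchanged on any host that exposes a container engine, whether physical, virtual, or in the cloud. The image name is just an example.

```python
# Minimal portability sketch: the image, not the host, defines the runtime.
# Requires the Docker SDK for Python: pip install docker
import docker

client = docker.from_env()  # connect to whatever container engine is local

# The same image runs identically on a laptop, a VM, or a cloud instance.
output = client.containers.run("alpine:latest", "echo portable workload", remove=True)
print(output.decode().strip())
```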

A deep dive into point 4.

Every company has its own core business, and in the majority of cases it needs IT technology.

Companies of any size?
Yes. Just think about your personal use of a mobile phone, perhaps to book a table at a restaurant or buy a movie ticket. I'm also quite sure IT will help us get over the Covid threat.

This is why I keep thinking that IT is not a "cost" but a way to gain more business and revenue by improving efficiency in any company.

Are there specific features that enable Data Mobility in a Kubernetes environment?

Yes. For a great example, have a look at the Kasten K10 product, which has many advanced features (the topic will be covered in depth in future articles because it is one of my great professional passions).
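Kasten K10's own API deserves its own article, so as a neutral appetizer here is a sketch that uses the official Kubernetes Python client to request a CSI VolumeSnapshot, one of the building blocks such data-mobility tools rely on. The namespace, PVC, and snapshot class names are hypothetical, and the cluster must have a snapshot-capable CSI driver installed.

```python
# Sketch: ask Kubernetes for a CSI snapshot of a PVC, a building block of
# data mobility. Names are hypothetical; a snapshot-capable CSI driver and
# the snapshot CRDs must be installed. pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "app-data-snap"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",          # hypothetical class
        "source": {"persistentVolumeClaimName": "app-data"}, # hypothetical PVC
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```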

Data Layer – Backup & Replica

What about services that can’t be containerized?

Is there a simple way to move data among different architectures?

Yes, it is possible by using copies of the data of VMs and physical servers.

In this scenario, it's important that the software can create backups/replicas wherever the workloads are located.

Is it enough?

From my point of view, it is not: the software also has to be able to restore data across different architectures.

For example, a customer may need to restore some on-premises workloads of their VMware architecture into a public cloud, or restore a backup of a VM located in a public cloud to an on-premises Hyper-V environment.

In other words, it means working with backup, replica, and restore in a multi-cloud environment; the sketch below illustrates the idea.
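To give the idea a shape, here is a deliberately abstract sketch of that multi-cloud restore workflow. Every name and function in it is a hypothetical placeholder, not a real vendor API.

```python
# Abstract sketch of the "cycle of data": the same self-consistent restore
# point can land on any platform. All names here are hypothetical.
BACKUP_URI = "repo://backups/vm-web01"  # placeholder repository address

def restore(backup_uri: str, target: str) -> None:
    """Dispatch a restore of a self-consistent backup to a target platform."""
    supported = {"vmware-onprem", "hyperv-onprem", "public-cloud"}
    if target not in supported:
        raise ValueError(f"unsupported target: {target}")
    # A real product would convert disk formats and inject drivers here.
    print(f"restoring {backup_uri} -> {target}")

# One restore point, many possible destinations.
for platform in ("hyperv-onprem", "public-cloud"):
    restore(BACKUP_URI, platform)
```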

The next pictures show the data process.

I call it "the cycle of data" because, starting from a copy, it is possible to freely move data from and to any infrastructure (public, hybrid, or private cloud).

Pictures 1 and 2 are just examples of the data-mobility concept; they can be extended by adding more platforms.

The starting point of Picture 1 is an on-premises backup that can be restored either on-premises or in the cloud. Picture 2 shows the backup of a public cloud workload restored in the cloud or on-premises.

It's an open circle in which data can be moved freely among platforms.

Note 4: A good suggestion is to use a Data Mobility architecture to set up a cold disaster recovery site (cold because the data used to restore the site comes from backups).

Picture 1

Picture 2

One more point completes this article: the replica feature.

Note 5: By replica I mean a way to create a mirror of the production workload. Compared to a backup, the replicated workload can be switched on without any restore operation because it is already written in the language of the host hypervisor.

The main purpose of replica technology is to create a hot disaster recovery site.

More details about how to orchestrate DR are available on this site under the entry Veeam Availability Orchestrator (now Veeam Disaster Recovery Orchestrator).

Replica can be implemented with three different technologies (a snapshot-based sketch follows the list):

  • LUN/storage replication
  • I/O splitting
  • Snapshot-based replication
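As a taste of the third approach, here is a sketch of a snapshot-based replication loop; the helper functions are hypothetical placeholders standing in for a storage or hypervisor API.

```python
# Sketch of snapshot-based replication: periodically freeze a consistent
# point in time and ship only the changed blocks to the DR site.
# The helpers are hypothetical placeholders, not a real API.
import time
from typing import Optional

def take_snapshot(vm: str) -> str:
    return f"{vm}-snap-{int(time.time())}"   # placeholder snapshot id

def changed_blocks(vm: str, since: Optional[str]) -> bytes:
    return b"..."                            # placeholder delta payload

def apply_to_replica(vm: str, delta: bytes) -> None:
    print(f"applied {len(delta)} bytes to the replica of {vm}")

def replicate(vm: str, interval_s: int = 300) -> None:
    last: Optional[str] = None
    while True:
        snap = take_snapshot(vm)             # consistent point in time
        apply_to_replica(vm, changed_blocks(vm, last))
        last = snap                          # next cycle ships only new changes
        time.sleep(interval_s)
```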

I'm going to cover those scenarios and Kasten K10 business cases in future articles.

That's all for today, folks.

See you soon, and take care.

Ransomware defense part 2: Hardening

There are many documents on the internet describing how to harden an environment against ransomware.

In this article, I'll give you a track to move more easily around this topic, pointing out the most interesting resources.

Before starting, let me thank Edwin Weijdema, who created an exhaustive guide answering the most common questions (please click here to get it).

Are you ready? Let's start!

1- The first starting point is Wikipedia, where I found a good definition:

In computing, hardening is usually the process of securing a system by reducing its surface of vulnerability, which is larger when a system performs more functions; in principle, a single-function system is more secure than a multipurpose one. Reducing available ways of attack typically includes changing default passwords, the removal of unnecessary software, unnecessary usernames or logins, and the disabling or removal of unnecessary services.
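To make the "reduce available functions" idea concrete, here is a minimal sketch for a systemd-based Linux host: it lists running services and flags anything outside an allow-list. The allow-list content is just an example, and the actual disable command is left commented out on purpose.

```python
# Hardening sketch: flag running services that are not explicitly allowed.
# The allow-list is an example; review before disabling anything.
import subprocess

ALLOWED = {"sshd.service", "chronyd.service"}  # example allow-list

result = subprocess.run(
    ["systemctl", "list-units", "--type=service", "--state=running",
     "--no-legend", "--plain"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    service = line.split()[0]
    if service not in ALLOWED:
        print(f"review and consider disabling: {service}")
        # subprocess.run(["systemctl", "disable", "--now", service], check=True)
```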

2- The second point is to understand the concept of Perimeter security:

It consists of natural barriers or artificially built fortifications whose goal is to keep intruders out of the area. The strategies can be listed as:

  • Use rack-mount servers
  • Keep intruders from opening the case
  • Disable the drives
  • Lock up the server room
  • Set up surveillance

A complete article is available by clicking here

3- The third point is Network segmentation:

It is the division of an organization's network into smaller and, consequently, more manageable groupings of interfaces called zones. These zones consist of IP ranges, subnets, or security groups, typically designed to boost performance and security.

In the event of a cyberattack, effective network segmentation will confine the attack to a specific network zone and contain its impact by blocking lateral movement across the network via logical isolation through access controls.

Designating zones allows organizations to consistently track the location of sensitive data and assess the relevance of an access request based on the nature of that data.  Designating where sensitive data reside permits network and security operations to assign resources for more aggressive patch management and proactive system hardening.

A complete article is available by clicking here
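A tiny sketch of zone-based thinking, using only Python's standard library: classify an address into a named zone. The zone layout is illustrative; real segmentation is enforced by firewalls, VLANs, and access controls, not by a script.

```python
# Sketch: classify an IP address into a named network zone.
# The zone map is an example layout, not a recommendation.
import ipaddress

ZONES = {
    "management": ipaddress.ip_network("10.0.0.0/24"),
    "backup":     ipaddress.ip_network("10.0.10.0/24"),
    "production": ipaddress.ip_network("10.0.20.0/22"),
}

def zone_of(address: str) -> str:
    ip = ipaddress.ip_address(address)
    for name, network in ZONES.items():
        if ip in network:
            return name
    return "untrusted"  # default-deny posture for anything unclassified

print(zone_of("10.0.10.15"))  # -> backup
```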

4- Hardening your Backup Repositories

The next set of rules involves your backup architecture and, specifically, your backup repositories:

Windows:

a. Use the built-in local administrator account
b. Set permissions on the repository directory
c. Modify the firewall
d. Disable remote RDP services

Linux:

e. Create a dedicated repository account
f. Set permissions on the repository directory (see the sketch after this list)
g. Configure the Linux repository in Veeam
h. Modify the firewall
i. Use Veeam encryption
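Here is a minimal sketch of points e and f on Linux, assuming a dedicated account (and a matching group) already exists; the account name and path are examples, and the script must run as root.

```python
# Sketch of points e-f: the dedicated account owns the repository
# directory and nobody else can read or write it. Run as root.
# Account and path are examples; the matching group is assumed to exist.
import os
import pwd
import grp

REPO_PATH = "/mnt/veeam-repo"   # example repository mount point
REPO_USER = "veeamrepo"         # example dedicated, non-root account

uid = pwd.getpwnam(REPO_USER).pw_uid
gid = grp.getgrnam(REPO_USER).gr_gid

os.makedirs(REPO_PATH, exist_ok=True)
os.chown(REPO_PATH, uid, gid)   # only the dedicated account owns the data
os.chmod(REPO_PATH, 0o700)      # no access for group or others
```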

Do you want to know more about security? If so, the Veeam Best Practices guide is for sure the answer.

The next article will cover monitoring and automatic actions using Veeam ONE.

5- Prevent injection of shady boot code

Code injection, which can lead to Remote Code Execution (RCE), occurs when an attacker exploits an input-validation flaw in software to introduce and execute malicious code.

To prevent this kind of attack, follow these rules (a quick check from Linux is sketched after the list):

a. Run with UEFI Native Mode
b. Use UEFI with Secure Boot Standard Mode
c. Combine Secure Boot with TPM
d. Equip critical servers with a TPM 2.0
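A quick way to verify points a and b from a running Linux host is to look at the standard efivarfs paths; a minimal sketch follows (the GUID is the standard EFI global-variable namespace).

```python
# Sketch: check UEFI boot mode and the Secure Boot state from Linux.
# /sys/firmware/efi exists only on UEFI boots; the SecureBoot variable's
# last byte is 1 when Secure Boot is enabled.
from pathlib import Path

EFI_DIR = Path("/sys/firmware/efi")
SB_VAR = EFI_DIR / "efivars" / "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"

if not EFI_DIR.exists():
    print("Legacy BIOS boot: switch the platform to UEFI native mode")
elif SB_VAR.exists() and SB_VAR.read_bytes()[-1] == 1:
    print("UEFI boot with Secure Boot enabled")
else:
    print("UEFI boot, but Secure Boot is disabled or not reported")
```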

Stay tuned and see you soon

Ransomware defense – part 1: Advanced product features are a mandatory requirement

A lot of new challenges have hit people working in IT departments over the last few months.

The number of ransomware attacks has been growing day by day, and the attack strategies are becoming more and more evil and dangerous.

The common questions managers ask the IT staff are:

a) Is the company protected against these risks?

A good answer is that an approach is successful when the degree of certainty outweighs the degree of risk.

b) Which are the best practices to be safer?

The key is defining the right process of protection.

The scope of these articles is to show the correct behaviors to keep your architecture as safe as possible or, in case of attack, to gain as much time as possible to fend off the assault.

The articles take the storage point of view and do not deal with perimeter defenses, antimalware, antivirus, networking strategies, and so on.

What are the main strategies to adopt?

  1. Having more copies of your data
  2. Hardening the infrastructure
  3. Monitoring behaviors

Are you ready? Let's start with the first topic!

1. Having more copies of your data:

Backup software is the right tool to achieve the goals of this first part.

It has to be able to:

a) Create application consistency backup.

b) Copy backup data to different locations.

Almost all backup software can do that, but some additional features help address the biggest challenges better:

Flexibility: backup software should be able to write backup data to different types of repositories and restore it without any external dependency. To be clearer, the backup data has to be self-consistent. The advantage is being able to fit different architectural scenarios (let's call it "Data Mobility").

Offline data: backup data should be placed in a "quarantine" area where it can be neither re-written nor read. The classic implementations are a tape-device architecture, or scripts that automatically detach the repository devices.

Immutability: the backup data cannot be changed until the immutability period is over. This has a double advantage over the offline-data strategy: the repository remains online and writable for new backup files, while already-written backup data is locked against re-writing (as with tape technologies), and restore speed remains unchanged.

Immutability can be reached in two ways:

Through WORM (Write Once, Read Many) devices, where backup files, once added to the repository, can only be used for restores. One example is optical disk technology, which I worked with in the past.

At Veeam Software, this common customer and partner request has been addressed using the immutability property of object storage. The good news is that VBR v11 implements this great feature directly in Linux repositories.
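Veeam handles immutability natively, so the following is only a generic illustration of the object-storage concept using boto3 and S3 Object Lock; the bucket (which must be created with Object Lock enabled) and key names are hypothetical.

```python
# Generic illustration of object-storage immutability (not Veeam's
# internal implementation): write an object with an Object Lock retention
# date. The bucket must have been created with Object Lock enabled.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
retain_until = datetime.now(timezone.utc) + timedelta(days=14)

s3.put_object(
    Bucket="backup-immutable-example",       # hypothetical bucket
    Key="backups/vm-web01.vbk",              # hypothetical backup object
    Body=b"...backup payload...",
    ObjectLockMode="COMPLIANCE",             # nobody can shorten or remove it
    ObjectLockRetainUntilDate=retain_until,  # immutable until this date
)
```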

Is this enough? I still think that a backup solution should at least be able to:

  • Check the backup file and the backup content. The only way to check whether a backup file is really reusable is to restore it in a separate area where communication with the production environment is forbidden. At Veeam it is called SureBackup (a conceptual sketch follows this list).
  • Check with your antivirus/antimalware that the backup files have not already been compromised somewhere along the way. At Veeam, the technology used is the Data Integration API.
  • Before restoring files or VMs to production, check with your antivirus/antimalware whether the data has already been attacked. At Veeam it is called Secure Restore.
  • Perform replica jobs. This helps create a disaster recovery site useful for a quick restart of services. At Veeam this feature has been included from the beginning, and SureBackup can be applied to replicas too (it is called SureReplica). V11 adds a very powerful feature: CDP (Continuous Data Protection).
  • Restore backup data to the public cloud when both the primary and the replica sites are totally out of order. I call it cold disaster recovery, and it needs at least one available restore point.
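As promised above, here is a conceptual sketch of the verification idea, not SureBackup's actual mechanics: a restore point exposed as a read-only mount in an isolated area is scanned with ClamAV before being trusted. The mount path is hypothetical; clamscan is the standard ClamAV CLI.

```python
# Conceptual verification sketch (not SureBackup's actual mechanics):
# scan an isolated, read-only restore mount before trusting it.
# clamscan exit codes: 0 = clean, 1 = infected, 2+ = scanner error.
import subprocess

MOUNT_POINT = "/mnt/verify/vm-web01"  # hypothetical restore mount

scan = subprocess.run(
    ["clamscan", "--recursive", "--infected", MOUNT_POINT],
    capture_output=True, text=True,
)
if scan.returncode == 0:
    print("restore point is clean: safe to promote")
elif scan.returncode == 1:
    print("malware found in backup content:\n" + scan.stdout)
else:
    raise RuntimeError(f"scanner error: {scan.stderr}")
```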

The next article's topic will be how to harden your backup architecture.

See you soon and take care!