Category Archives: Cloud


Modern Applications – Episode 2: Ports & Networking

As written in the last article, a container can run more than one image.

Picture 1 shows an example of three different workloads running in a single container.

Picture 1

It’s also possible to work with different versions of the same image.

For example, MySQL has several image versions that can be installed and run in the same container.

Note 1: Nowadays the available MySQL images are:

  • 8.0.25, 8.0, 8, latest
  • 5.7.34, 5.7, 5
  • 5.6.51, 5.6
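As a minimal sketch (the container names and the root password below are just placeholders), two of the versions listed above could be started side by side like this:

docker run -d --name mysql80 -e MYSQL_ROOT_PASSWORD=secret mysql:8.0
docker run -d --name mysql57 -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

Each command pulls a different tag and starts it as a separate workload.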

Picture 2 shows a container where three different images run, two of which are different versions of the same application.

Picture 2

Let’s digress slightly and talk about how a service is built.

Most of the time it is built by grouping applications, which means grouping several types of images.

The question is: How do images talk to each other?

The answer is quite easy. They talk through the network, where IP addresses and ports are in charge of the communication to and from the applications (picture 3).
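As a quick, hedged example (the container name webapp is just a placeholder), the IP address assigned to a running container on the default network can be checked with:

docker inspect -f '{{.NetworkSettings.IPAddress}}' webapp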

Picture 3

There is just a simple rule to remember when a container network architecture is deployed.

As shown in picture 4, while the port used by a running image can be the same for different applications (for example 16161), the port assigned on the back-end server must always be different (4000, 4001, 4002).

Note 2: The port numbers are just examples; remember that the highest available port number is 2^16 – 1 = 65535.

Picture 4

Wrap-up: this binding network architecture is completely allowed, but the host back-end can’t expose the same port number for more than one service.
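A minimal sketch of the wrap-up rule (the image name myapp and the port numbers are only illustrative): the same container port can be reused, but each container must be published on a different host port.

docker run -d -p 4000:16161 myapp
docker run -d -p 4001:16161 myapp
docker run -d -p 4002:16161 myapp

Trying to publish a second container on an already used host port would fail with a "port is already allocated" error.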

Let’s go deeper into networking in the Container environment:

The network topology is defined by the drivers in use.

They can be:

1. Host

When the container comes up, it attaches its ports directly to the host network.

In this way, it shares the host’s TCP/IP stack and network namespace.

Segregation is guaranteed by the Docker technology (Picture 5).

Picture 5
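A minimal example of the host driver, assuming an nginx image is used:

docker run -d --network host nginx

The web server listens directly on the host’s port 80; no -p mapping is needed because the ports are already those of the host.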

2. Bridge

This is the default network mode.

It creates an isolated bridge network where the containers run inside a range of IP addresses.

In this scenario, the containers can talk to each other, but no connection is allowed from the outside.

To allow communication with external services in Docker, it’s necessary to start the container with the -p option.

docker run -p hostport:containerport image-name (e.g.: docker run -p 2400:2451 mysql)

Host port 2400 is now mapped to container port 2451.

From a security point of view this is great: you can monitor and select which ports are going to be used for a service (Picture 6).

Picture 6
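As a small sketch of the bridge scenario (the network name, container names, and password are placeholders), containers on the same user-defined bridge can reach each other by name, while only the published port is reachable from outside:

docker network create my-bridge
docker run -d --name db --network my-bridge -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name web --network my-bridge -p 8080:80 nginx

Here web can reach db directly on port 3306, but from outside the host only port 8080 is exposed.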

3. Overlay

While the previous drivers are single-host networking topologies, the overlay driver allows communication among containers hosted on different hosts.

This scenario requires cluster intelligence to manage the traffic and guarantee segregation; it could be Swarm or Kubernetes (picture 7).

The core technology that enables this is VXLAN, which creates a tunnel on top of the underlay network and is part of the operating system.

The traffic can be encrypted (AES) with rotating keys.

When a service is exposed (with the -p option seen before), all traffic is automatically routed, no matter where the service is running.

One more interesting detail: each container has two IP addresses. The first one belongs to the overlay network and is used by the containers to talk to each other (internal traffic); the second one is used by the VXLAN side and allows traffic to the outside.

Picture 7
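A hedged sketch with Docker Swarm (the network and service names are placeholders); the --opt encrypted flag enables the AES encryption mentioned above:

docker swarm init
docker network create --driver overlay --opt encrypted my-overlay
docker service create --name web --network my-overlay -p 8080:80 nginx

The service is reachable on port 8080 of any node of the swarm, no matter where the container is actually running (routing mesh).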

4. Null (Black box)

No network connection

5. MacVLan

It’s possible to implement a macvlan network through a dedicated driver. The goal is to give the container network the behaviour of a traditional physical network. It’s necessary that the physical network accepts promiscuous mode.
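A minimal sketch, assuming the parent interface is eth0 and using purely illustrative subnet values:

docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my-macvlan
docker run -d --network my-macvlan nginx

The container now appears on the physical network with its own MAC and IP address, like a traditional host.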

That’s all for now. Take care and see you soon.

Modern Applications – Episode 1: Fundamentals

Introduction

This is the first of a series of articles about the technologies that can modernize applications.

The goal is to help the reader understand the potential of this new way of doing business, allowing companies to be more competitive.

These articles follow my personal approach and studies of Kubernetes.

I’m paying attention to how to make services available and protected by exploiting internal and external native technologies.

Let’s start !!!

What is a container?

It’s a way to package an application with its relevant dependencies and configuration in a single block.
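As a hedged illustration of the “one block” idea (the base image and file names are just placeholders), a minimal Dockerfile might look like this:

FROM python:3.9
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py config.yaml ./
CMD ["python", "app.py"]

It is turned into a runnable image with docker build -t my-app . and started with docker run -d my-app.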

There are at least two big advantages of this approach:

  • By its native architecture, the container is portable. This means you can run it on any architecture, wherever it is located (please read the previous article about Digital Transformation and Data Mobility).
  • Deploying services proves easier and more efficient than in the traditional world because there are already plenty of software images ready to be used.

Where can I download the images to run in the containers?

There are public and private repositories (please do not confuse them with a VBR repository).

The most famous container technology is Docker, which has a public repository called Docker Hub.
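For example, images can be searched for and pulled from Docker Hub with:

docker search mysql
docker pull mysql:8.0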

What is a container exactly?

A container allows isolated images to run on an operating system.

Container vs Virtual Machine

The difference between the two architectures seems very small, but they actually represent two different worlds.

Both technologies are virtualization tools, but while Docker focuses on the application layer (picture 1), a VM virtualizes the kernel as well as the applications (picture 2).

Picture 1

Picture 2

What are the main advantages of this new approach?

  • The container has a small footprint (a few MB compared to GB).
  • The boot is faster.
  • Easier compatibility list.
  • It can run on all common operating systems, such as Windows, macOS, and Linux.

Container vs Image

For the next articles, it’s crucial to be very clear about the difference between a container and an image.

Let’s use picture 3, which shows the application composition.

There are four main elements:

  1. Image: It’s the code written by developers. It is downloaded from repositories.
  2. Configuration: It represents the setup created to allow the application to run.
  3. File System: It’s the place where the application and its data are stored.
  4. Network: It allows all components to talk to each other.

The container is where the application runs.
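A hedged sketch that maps the four elements above to a single docker run (names and values are only illustrative):

docker network create app-net
docker volume create app-data
docker run -d --name db --network app-net -v app-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:8.0

Here mysql:8.0 is the image pulled from a repository, the -e variable is the configuration, the -v volume is the file system where the data are stored, and --network is what lets the components talk to each other.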

Picture 3

Note 1: Images are part of the container. Think of the container as a multitasking OS specialized to run applications simultaneously.

Note 2: To get info about Docker, please refer to the official website. For example, to run an image just launch the following command: docker run image-name

Note 3: There are more Container technologies; the most common are:

  • rkt (CoreOS)
  • LXC
  • LXD (Canonical)
  • Linux VServer
  • OpenVZ/Virtuozzo 7
  • runC

That’s all for now,  see you soon and take care.

Digital Transformation & Data Mobility

If Cloud has been the most used word of the last five years, the words that have been buzzing around the IT world in the last five months are Digital Transformation.

From Wikipedia:

“Digital Transformation (DT or DX) is the adoption of digital technology to transform services and businesses, through replacing non-digital or manual processes with digital processes or replacing older digital technology with newer digital technology.”

Or: Digital Transformation must help companies to be more competitive through the fast deployment of new services always aligned with business needs.

Note 1: Digital transformation is the basket, technologies to be used are the apples, services are the means of transport, shops are clients/customers.

1. Can all the already existing architectures work for Digital Transformation?

  • I prefer to answer by rephrasing the question with more appropriate words:

2. Does Digital transformation require that data, applications, and services move from and to different architectures?

  • Yes, this is a must, and it is called Data Mobility.

Note 2: Data mobility refers to the innovative technologies able to move data and services among different architectures, wherever they are located.

3. Does Data Mobility mean that services can be independent of the underlying infrastructure?

  • Actually, it is not completely true; it means that, although nowadays there is no standard language allowing different architectures/infrastructures to talk to each other, Data Mobility is able to get over this limitation.

4. Is it independent of any vendor?

  • When a standard is released, all vendors want to implement it as soon as possible because they are sure that these features will improve their revenue. Currently, this standard doesn’t exist yet.

Note 3: I think the reason is that there are so many objects to count, analyze, and develop that the economic effort to do it is, at the moment, not justified.

5. Is there already a ready “Data Mobility” technology?

The answer could be quite long but, to make a long story short, I wrote the following article, which is composed of two main parts:

  • Application Layer (Container – Kubernetes)
  • Data Layer (Backup, Replica)

Application Layer – Container – Kubernetes


In the modern world, services are running in a virtual environment (VMware, Hyper-V, KVM, etc).

There are still old services that run on legacy architectures (Mainframe, AS/400, …); old doesn’t mean that they are not updated, just that they have a very long history.

In the coming years, services will run in a special “area” called a “container”.

The container runs on an operating system and can be hosted on a virtual, physical, or cloud architecture.

Why are containers, and the skills to use them, so in demand?

There are many reasons, and I’m listing them in the following lines.

  1. IT managers need to move data among architectures in order to improve resilience and lower costs.
  2. Container technology simplifies the developers’ code writing because it uses a standard, widely adopted language.
  3. The services that run on containers are fast to develop, update, and change.
  4. The container is de facto a new standard, and this gives it a great advantage: it gets over the obstacle of missing standards among architectures (private, hybrid, and public cloud).

A deep dive into point 4.

Any company has its own core business and in the majority of cases, it needs IT technology.

Companies of any size?
Yes, just think about your personal use of your mobile phone, maybe to book a table at a restaurant or buy a ticket for a movie. I’m also quite sure it will help us get over the Covid threat.

This is the reason why I still think that IT is not a “cost” but a way to generate more business and revenue by improving efficiency in any company.

Are there specific features that allow data mobility in the Kubernetes environment?

Yes; for a great example, please have a look at the Kasten K10 product, because it has many advanced features (the topic will be covered in depth in the next articles because it is one of my great working passions).

Data Layer

What about services that can’t be containerized?

Is there a simple way to move data among different architectures?

Yes, that’s possible by using copies of the data of VMs and physical servers.

In this business scenario, it’s important that the software can create Backup/Replicas wherever the workloads are located.

Is it enough?

From my point of view, it is not. The software also has to be able to restore data across different architectures.

For example, a customer may need to restore some on-premises workloads of their VMware architecture to a public cloud, or restore a backup of a VM located in a public cloud to an on-premises Hyper-V environment.

In other words, it means working with backup/replica and restore in a multi-cloud environment.

The next pictures show the Data Process.

I called it “The Cycle of Data” because, starting from a copy, it is possible to freely move data from and to any infrastructure (public, hybrid, or private cloud).

Pictures 1 and 2 are just examples of the data-mobility concept. They can be modified by adding more platforms.

The starting point of Picture 1 is an on-premises backup that can be restored either on-premises or in the cloud. Picture 2 shows the backup of a public cloud workload restored in the cloud or on-premises.

It’s an open circle where data can be moved around platforms.

Note 4: A good suggestion is to use a data-mobility architecture to set up a cold disaster recovery site (cold because the data used to restore the site are backups).

Picture 1

Picture 2

There is one more point to complete this article and it is the Replica feature.

Note 5: By Replica I mean the way to create a mirror of the production workload. Compared to backup, in this scenario the workload can be switched on without any restore operation because it is already written in the language of the host hypervisor.

The main scope of replica technology is to create a hot Disaster Recovery site.

More details about how to orchestrate DR are available on this site under Veeam Availability Orchestrator (now Veeam Disaster Recovery Orchestrator).

The replica can be developed with three different technologies: 

  • Lun/Storage replication
  • I/O split
  • Snapshot based

I’m going to cover those scenarios and Kasten K10 business cases in future articles.

That’s all for today folks.

See you soon, and take care.