Are you ready for containers?
It is no secret that containers are becoming a central building block in the IT industry. The Cloud Native Computing Foundation (CNCF) estimates that 84% of its survey respondents, predominantly developers, use containers in production. Gartner forecasts that “By 2023, more than 70% of global organizations will be running more than two containerized applications in production, up from less than 20% in 2019.” CIO states that “we’ve reached a tipping point; it’s time for enterprises worldwide to embark on a container strategy, or they just may miss the boat.” The Netrounds solution helps you avoid "missing the boat" by driving the development of active assurance forward and letting you use the latest technologies that make a difference for operators and enterprises.
Kubernetes and containers offer very valuable capabilities to both operators and enterprise customers: efficient use of the underlying infrastructure, scaling only the parts of the network that need scaling, portability of software across operating systems, support on all major cloud platforms (AWS, GCP, Azure, etc.), and rapid, dynamic installation of software through continuous delivery. Below are a few use cases, in addition to cloud deployments, in the service provider segment where containers are beginning to be used:
- 5G network: Rakuten and their partners announced that their 5G network strategy includes containers; “if you think about the 5G RAN core, it is different from the 4G RAN core ... from the 3GPP, it is implemented as a micro-services-based architecture.”
- Routers: The Cisco NCS platform is one example of a network device where containers are beginning to be supported.
- Low-end devices: Containers are also becoming available on low-end network devices, such as residential gateways (RGWs).
While this is all fantastic, more capabilities also bring complexity into truly understanding customer experience. In today’s networks, not only do applications and infrastructure need to be monitored and managed, but also the intercommunication among containers, their status, and the parts of applications running inside them. The ability to both test and continuously monitor performance from a customer perspective is imperative.

These complex environments bring new network issues to overcome. The dynamic nature of containers forces service assurance projects to be automated and quick to answer how the services are working. The points of delivery (PoDs) can be instantiated anywhere and can be moved at any time, which makes tools built for static environments difficult to use. Container technologies also shield the underlying complexity of servers and networks, which has the side effect of making it harder to correlate customer perception with where the problem actually lies. In this environment, device- and infrastructure-centric monitoring becomes even more questionable when it comes to understanding customer experience [read our white paper “Service Assurance: Time to Abandon the Geocentric Model” for a closer look at the difference between passive assurance systems and active assurance], and you will therefore need to consider additional solutions. Below is an example of how complicated the underlying network can be.
Figure 1: Confirming Customer Experience in Containers with Netrounds' Cloud-Native Function (CNF)
To this end, Netrounds has developed the Test Agent Cloud-Native Function (TA CNF) to enable end-to-end active monitoring of cloud-native infrastructures. In this blog post we briefly introduce the Test Agent CNF and the features that are available. In subsequent blog posts we will address various use cases for our TA CNF, including how you can use the Netrounds TA CNF to test, monitor and troubleshoot your Kubernetes installation, and what capabilities we have developed together with our partner, Cisco, using containers and features on the Cisco Network Convergence System (NCS) hardware platform.
Our Test Agent CNF is designed to be cloud-native. It is lightweight with a small footprint, fully stateless with no persistent storage requirements, and usable within seconds. As one of our customers put it, it is “silly simple” to get started with Netrounds’ solution.
Getting Started with TA CNF
All stateful services that make the architecture work reside in our Netrounds Control Center, and the flexible plugin architecture allows the image footprint to be optimized for different use cases. The TA CNF can be fully automated and orchestrated, and all of our interfaces (UI, REST API, and NETCONF & YANG) as well as automation work out of the box. Silly simple, indeed.
Netrounds set out to develop the Test Agent CNF to be state of the art: something that can help with the use cases brought up at the beginning of this blog. Currently, we support the Two-Way Active Measurement Protocol (TWAMP), single- and multi-flow Transmission Control Protocol (TCP), and single- and multi-flow User Datagram Protocol (UDP) measurements. In our next release we will extend the tools with support for video, both over-the-top (OTT) video and multicast-based broadcasting [Internet Protocol television (IPTV)], as well as universal tools such as ping. In collaboration with Cisco we have also developed a proof of concept that performs Y.1564 tests in a Layer 2 network using software from Netrounds and a hardware-accelerated tool from Cisco, beautifully combining software, containers and hardware.
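To illustrate the active-measurement principle behind tools like TWAMP, here is a minimal sketch, not Netrounds' implementation: a sender timestamps UDP probes, a reflector echoes them back, and the round-trip time is computed at the sender. All names and values below are illustrative placeholders.

```python
import socket
import threading
import time

def reflector(sock):
    """Echo each received probe back to its sender (the reflector role in TWAMP-style tests)."""
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"stop":
            break
        sock.sendto(data, addr)

def measure_rtt(target, count=5):
    """Send timestamped UDP probes to the reflector and record each round-trip time."""
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.settimeout(2.0)
    rtts = []
    for seq in range(count):
        t0 = time.monotonic()
        sender.sendto(f"probe-{seq}".encode(), target)
        sender.recvfrom(1024)  # wait for the echoed probe
        rtts.append(time.monotonic() - t0)
    sender.sendto(b"stop", target)  # tell the reflector to shut down
    sender.close()
    return rtts

# Run sender and reflector in one process over loopback for demonstration.
refl_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
refl_sock.bind(("127.0.0.1", 0))
port = refl_sock.getsockname()[1]
t = threading.Thread(target=reflector, args=(refl_sock,))
t.start()
rtts = measure_rtt(("127.0.0.1", port))
t.join()
refl_sock.close()
print(f"min/avg/max RTT: {min(rtts):.6f}/{sum(rtts)/len(rtts):.6f}/{max(rtts):.6f} s")
```

A production tool such as TWAMP adds sequence numbers, hardware or kernel timestamping, loss accounting and one-way delay estimation on top of this basic probe-and-reflect loop.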
How can you try our new Test Agent CNF? If you have a Netrounds Control Center (NCC) or an account on our SaaS, and you have a license that includes the right to use the TA CNF, you can always find our software on Docker Hub (https://hub.docker.com/u/netrounds). To get started, run the following commands (substituting your own NCC host, account, user and credentials):
- docker pull netrounds/test-agent-application
- docker run --rm netrounds/test-agent-application register --ncc-host=app.netrounds.com --ssl-no-check --account=netrounds_ab --name=cTA-a --email=user@example.com --password=SecretPassword123
- docker run --rm netrounds/test-agent-application -k (secret key that was given in the previous step) --ncc-host=app.netrounds.com
Three commands are all it takes, and the result is a TA CNF deployed literally within seconds. It is available in the Netrounds Control Center, ready to be used for active testing.
Figure 2: Live Test Agent in Docker as seen from the Netrounds Control Center (NCC).
There are other deployment options for our TA CNF as well, including running natively on RHEL, Debian and Ubuntu.
Going back to the areas where containers are becoming available:
- Private or public cloud and server infrastructure - Previously, we supported running a virtual-machine Test Agent (TA VNF) alongside your server VM. Now you can deploy a TA CNF within the server itself and get even closer to the actual server performance.
- 5G network - By deploying our TA CNF in this infrastructure you will be able to monitor your network end-to-end, both using virtual machines and now also containers.
- Network devices - With containers supported on routers, of which Cisco IOS XR is one example, you can use Netrounds capabilities without deploying additional hardware, further removing the obstacles to a ubiquitous active test and monitoring solution that helps you increase the quality of your services. We will address this use case in a later blog.
- Low-end devices - Deploying the Netrounds TA CNF in residential homes, on an RGW, would allow you to effectively answer whether performance issues originate in the customer's home environment or in the network. Issues in the home environment are usually the responsibility of the customer, while network issues are the operator’s responsibility. This delimitation is a major question that needs to be addressed, since it determines whether the operator can and should take proactive measures.
- And finally, Kubernetes deployment - There are many monitoring solutions for Kubernetes; however, Netrounds brings a distinctive approach to the data plane by generating synthetic traffic to emulate the communication among microservices deployed in Kubernetes. We will address this capability in a later blog, so stay tuned.
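For the Kubernetes case, a containerized agent like the one registered with the docker commands above could also be managed by the cluster itself. The following is a minimal, hypothetical sketch of such a Deployment, not official Netrounds packaging: the image name comes from the Docker Hub commands earlier in this post, the arguments mirror the final docker run step, and the Secret name and key are placeholders we invented for illustration.

```shell
# Hypothetical sketch: run the test agent as a single-replica Deployment,
# pulling the registration key from a Kubernetes Secret (names are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netrounds-test-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netrounds-test-agent
  template:
    metadata:
      labels:
        app: netrounds-test-agent
    spec:
      containers:
      - name: test-agent
        image: netrounds/test-agent-application
        args: ["-k", "$(AGENT_KEY)", "--ncc-host=app.netrounds.com"]
        env:
        - name: AGENT_KEY
          valueFrom:
            secretKeyRef:
              name: netrounds-agent-key   # placeholder Secret name
              key: key                    # placeholder Secret key
EOF
```

Running the agent under a Deployment means Kubernetes restarts or reschedules it automatically, which fits the stateless, register-on-start design described above.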
To make sure that you don't miss part 2, subscribe to our blog to be notified when the next post is published.