Architecture Documentation

1.1 Use Cases


The OpenContrail Controller is a logically centralized but physically distributed Software Defined Networking (SDN) controller that is responsible for providing the management, control, and analytics functions of the virtualized network. The OpenContrail vRouter is the forwarding plane of a distributed router that runs in the hypervisor of a virtualized server.

It extends the network from the physical routers and switches in a data center into a virtual overlay network hosted in the virtualized servers (the concept of an overlay network is explained in more detail in section 1). The OpenContrail vRouter is conceptually similar to existing commercial and open source vSwitches such as, for example, Open vSwitch (OVS), but it also provides routing and higher-layer services (hence vRouter instead of vSwitch).

The OpenContrail Controller provides the logically centralized control plane and management plane of the system and orchestrates the vRouters.

Virtual networks are logical constructs implemented on top of the physical networks. Virtual networks are used to replace VLAN-based isolation and provide multi-tenancy in a virtualized data center. Each tenant or application can have one or more virtual networks. Each virtual network is isolated from all the other virtual networks unless explicitly allowed by security policy.

How this is achieved using virtual networks is explained in detail in section 2. Virtual networks can be implemented using a variety of mechanisms. They can also be implemented using two networks: a physical underlay network and a virtual overlay network. This overlay networking technique has been widely deployed in the wireless LAN industry for more than a decade, but its application to data-center networks is relatively new.

It is being standardized in various forums such as the Internet Engineering Task Force (IETF) through the Network Virtualization Overlays (NVO3) working group and has been implemented in open source and commercial network virtualization products from a variety of vendors. An ideal underlay network provides uniform low-latency, non-blocking, high-bandwidth connectivity from any point in the network to any other point in the network. The underlay physical routers and switches do not contain any per-tenant state: the forwarding tables of the underlay physical routers and switches only contain the IP prefixes or MAC addresses of the physical servers.

Gateway routers or switches that connect a virtual network to a physical network are an exception; they do need to contain tenant MAC or IP addresses. The vRouters, on the other hand, do contain per-tenant state. They contain a separate forwarding table (a routing instance) per virtual network.

That forwarding table contains the IP prefixes (in the case of layer 3 overlays) or the MAC addresses (in the case of layer 2 overlays) of the virtual machines. No single vRouter needs to contain all IP prefixes or all MAC addresses for all virtual machines in the entire data center. A given vRouter only needs to contain those routing instances that are locally present on the server, i.e., the routing instances of virtual networks that have at least one virtual machine on that server.
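As a concrete illustration of this split, the following minimal Python sketch contrasts an underlay forwarding table, which holds only physical server addresses, with a vRouter that holds one routing instance per locally present virtual network. All names and addresses are invented for the example; this is not OpenContrail code.

    # Illustrative sketch (not OpenContrail code): per-tenant state lives only in the
    # vRouters, as one routing instance (forwarding table) per locally present virtual network.

    underlay_fib = {
        "10.0.0.11/32": "port-1",   # underlay devices know only physical server addresses
        "10.0.0.12/32": "port-2",
    }

    class RoutingInstance:
        """One forwarding table per virtual network (IP prefixes for L3, MACs for L2)."""
        def __init__(self, virtual_network):
            self.virtual_network = virtual_network
            self.routes = {}   # prefix or MAC -> next hop (local interface or tunnel)

    class VRouter:
        """Holds routing instances only for virtual networks with at least one local VM."""
        def __init__(self):
            self.instances = {}

        def add_local_vm(self, virtual_network, vm_prefix, next_hop):
            inst = self.instances.setdefault(virtual_network, RoutingInstance(virtual_network))
            inst.routes[vm_prefix] = next_hop

    vrouter = VRouter()
    vrouter.add_local_vm("tenant-a-net", "192.168.1.3/32", "tap0")
    # "tenant-b-net" gets no routing instance here because none of its VMs run on this server.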

Various control plane protocols and data plane protocols for overlay networks have been proposed by vendors and standards organizations. The control plane protocol between the control plane nodes of the OpenContrail system or a physical gateway router or switch is BGP (with Netconf for management). The fact that the OpenContrail System uses control plane and data plane protocols that are very similar to the protocols used for MPLS L3VPNs and EVPNs has multiple advantages: these technologies are mature and known to scale, they are widely deployed in production networks, and they are supported in multi-vendor physical gear, which allows for seamless interoperability without the need for software gateways.

OpenContrail is designed to operate in an open source cloud environment and to provide a fully integrated end-to-end solution. OpenContrail is available under the permissive Apache 2.0 license. Juniper Networks also provides a commercial version of the OpenContrail System. Commercial support for the entire open source stack (not just the OpenContrail System, but also the other open source components such as OpenStack) is available from Juniper Networks and its partners.

Earlier we mentioned that the OpenContrail Controller is logically centralized but physically distributed. Physically distributed means that the OpenContrail Controller consists of multiple types of nodes, each of which can have multiple instances for high availability and horizontal scaling. Those node instances can be physical servers or virtual machines.

For minimal deployments, multiple node types can be combined into a single server. There are three types of nodes: configuration nodes, control nodes, and analytics nodes. The physically distributed nature of the OpenContrail Controller is a distinguishing feature.

Because there can be multiple redundant instances of any node, operating in an active-active mode (as opposed to an active-standby mode), the system can continue to operate without interruption when any node fails. When a node becomes overloaded, additional instances of that node type can be instantiated, after which the load is automatically redistributed. This prevents any single node from becoming a bottleneck and allows the system to manage very large deployments of tens of thousands of servers.

Logically centralized means that the OpenContrail Controller behaves as a single logical unit, despite the fact that it is implemented as a cluster of multiple nodes. Data models play a central role in the OpenContrail System. A data model consists of a set of objects, their capabilities, and the relationships between them. The data model permits applications to express their intent in a declarative rather than an imperative manner, which is critical in achieving high programmer productivity.

Thus applications can be treated as being virtually stateless. The most important consequence of this design is that individual applications are freed from having to worry about the complexities of high availability, scale, and peering. There are two types of data models: a high-level service data model and a low-level technology data model. The high-level service data model describes the desired state of the network at a very high level of abstraction, using objects that map directly to services provided to end users, for example a virtual network, a connectivity policy, or a security policy.

The low-level technology data model describes the desired state of the network at a very low level of abstraction, using objects that map to specific network protocol constructs, such as a BGP route target or a VXLAN network identifier.

The configuration nodes are responsible for transforming any change in the high-level service data model to a corresponding set of changes in the low-level technology data model.
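As a rough illustration of that transformation, the following Python sketch maps a high-level virtual network object onto low-level technology objects such as a routing instance, a route target, and a VXLAN network identifier. The naming and numbering schemes are assumptions made for the example; the real transformation engine is considerably richer.

    # Illustrative sketch: a configuration node turns a high-level virtual network object
    # into low-level technology objects. Names and numbering are invented for the example.

    def transform_virtual_network(vn_name, vn_index):
        """Map one high-level virtual network to low-level technology objects."""
        return {
            "routing-instance": f"{vn_name}-ri",
            "route-target": f"target:64512:{vn_index}",      # assumed private ASN
            "vxlan-network-identifier": 4096 + vn_index,     # assumed VNI allocation scheme
        }

    high_level = [("red-vn", 1), ("blue-vn", 2)]
    low_level = {name: transform_virtual_network(name, idx) for name, idx in high_level}
    print(low_level)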

The control nodes are responsible for realizing the desired state of the network as described by the low-level technology data model, using a combination of southbound protocols including XMPP, BGP, and Netconf. The configuration layer is also horizontally scalable because the API load can be spread over multiple configuration node instances.

The initial version of the OpenContrail System ships with a specific high-level service data model, a specific low-level technology data model, and a transformation engine to map the former to the latter.

Furthermore, the initial version of the OpenContrail System ships with a specific set of southbound protocols. The high-level service data model that ships with the initial version of the OpenContrail System models service constructs such as tenants, virtual networks, connectivity policies, and security policies. These modeled objects were chosen to support the initial target use cases, namely cloud networking and NFV.

The low-level technology data model that ships with the initial version of the OpenContrail System is specifically geared towards implementing the services using overlay networking. New southbound protocols can be introduced into the control nodes. This may be needed to support new types of physical or virtual devices in the network that speak a different protocol; for example, the Command Line Interface (CLI) for a particular network equipment vendor could be introduced.

Or this may be needed because new objects are introduced in the low-level technology data model that require new protocols to be implemented. As shown below in Figure 1, the OpenContrail System consists of two parts: a logically centralized but physically distributed controller, and a set of vRouters running on the virtualized servers. The controller exposes a set of north-bound REST APIs; these APIs are used for integration with the cloud orchestration system, for example for integration with OpenStack via a Neutron (formerly known as Quantum) plug-in.

The OpenContrail System provides three interfaces: a set of north-bound REST APIs used to talk to the orchestration system and to applications, south-bound interfaces used to talk to virtual network elements (vRouters) and physical network elements (gateway routers and switches), and an east-west interface used to peer with other controllers. The vRouters should be thought of as network elements implemented entirely in software. They are responsible for forwarding packets from one virtual machine to other virtual machines via a set of server-to-server tunnels.

The tunnels form an overlay network sitting on top of a physical IP-over-Ethernet network. Each vRouter consists of two parts: a vRouter agent and a vRouter forwarding plane, both described later in this chapter. We now turn to the internal structure of the system.

As shown in Figure 2, the system is implemented as a cooperating set of nodes running on general-purpose x86 servers. Each node may be implemented as a separate physical server or as a virtual machine (VM). All nodes of a given type run in an active-active configuration, so no single node is a bottleneck. This scale-out design provides both redundancy and horizontal scalability. In addition to the node types that are part of the OpenContrail Controller, we also identify some additional node types for physical servers and physical network elements performing particular roles in the overall OpenContrail System: compute nodes, gateway nodes, and service nodes.

For clarity, the figure does not show physical routers and switches that form the underlay IP over Ethernet network. There is also an interface from every node in the system to the analytics nodes.

This interface is not shown in Figure 2 to avoid clutter. The compute node is a general-purpose x86 server that hosts VMs. Those VMs can be tenant VMs running customer applications such as web servers, database servers, or enterprise applications, or they can host virtualized services used to create service chains. The vRouter forwarding plane sits in the Linux kernel, and the vRouter agent is the local control plane. This structure is shown in Figure 3 below. Two of the building blocks in a compute node implement the vRouter: the vRouter agent and the vRouter forwarding plane. These are described in the following sections.

The vRouter agent is a user space process running inside Linux. It acts as the local, light-weight control plane and is responsible for functions such as exchanging routes and other control state with the control nodes over XMPP, receiving low-level configuration state such as routing instances, and programming the forwarding state of the vRouter forwarding plane.

Each vRouter agent is connected to at least two control nodes for redundancy in an active-active redundancy model. The vRouter forwarding plane runs as a kernel loadable module in Linux and is responsible for data plane functions such as encapsulating packets sent to the overlay network, decapsulating packets received from the overlay network, looking up destinations in the per-virtual-network forwarding tables, and applying the actions programmed in the flow table. The vRouter forwarding plane currently only supports IPv4; support for IPv6 will be added in the future. Figure 6 shows the internal structure of a configuration node.

The configuration node communicates with the Orchestration system via a REST interface, with other configuration nodes via a distributed synchronization mechanism, and with control nodes via IF-MAP.

Configuration nodes also provide a discovery service that clients can use to locate the service providers, i.e., the nodes that provide a particular service. For example, when the vRouter agent in a compute node wants to connect to a control node (more precisely, to two control nodes for redundancy), it uses the discovery service to locate them. Figure 7 below shows the internal structure of an analytics node. An analytics node communicates with applications using a north-bound REST API, communicates with other analytics nodes using a distributed synchronization mechanism, and communicates with components in control and configuration nodes using an XML-based protocol called Sandesh, designed specifically for handling high volumes of data.

Sandesh carries two kinds of messages: asynchronous messages used to report logs, events, and traces, and synchronous request-response messages used to collect specific operational state. All information gathered by the collector is persistently stored in the NoSQL database. No filtering of messages is done by the information source. A single GET request, and a single corresponding CLI command in the client application, can be mapped to multiple request messages whose results are combined. The query engine is implemented as a simple map-reduce engine. The vast majority of OpenContrail queries are time series.
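The following Python sketch illustrates the map-reduce style of a time-series query: a map phase selects matching messages from a shard of the message store, and a reduce phase combines the partial results into per-interval counts. The message fields and the query shape are assumptions for illustration, not the Sandesh schema.

    # Illustrative sketch of a time-series query answered map-reduce style. The message
    # fields and query shape are assumptions, not the Sandesh schema.

    from collections import defaultdict

    messages = [
        {"ts": 100, "node": "vrouter-1", "level": "error"},
        {"ts": 105, "node": "vrouter-2", "level": "info"},
        {"ts": 160, "node": "vrouter-1", "level": "error"},
    ]

    def map_phase(shard, start, end, level):
        """Select matching messages from one shard of the message store."""
        return [m for m in shard if start <= m["ts"] < end and m["level"] == level]

    def reduce_phase(partials):
        """Combine partial results into counts per 60-second bucket."""
        buckets = defaultdict(int)
        for part in partials:
            for m in part:
                buckets[m["ts"] // 60 * 60] += 1
        return dict(buckets)

    print(reduce_phase([map_phase(messages, 0, 200, "error")]))   # {60: 1, 120: 1}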

The forwarding plane is implemented using an overlay network. The overlay network can be a layer-3 IP overlay network or a layer-2 Ethernet overlay network. For layer-3 overlays, initially only IPv4 is supported; IPv6 support will be added in later releases. Layer-3 overlay networks support both unicast and multicast. One of the main advantages of the VXLAN encapsulation is that it has better support for multi-path in the underlay, by virtue of putting entropy (a hash of the inner header) in the source UDP port of the outer header.
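The following Python sketch illustrates the entropy idea: hash the inner headers of a flow and fold the result into the outer UDP source port, so that underlay ECMP hashing (which only sees outer headers) spreads different inner flows across different paths. The hash function and port range are assumptions, not the actual encapsulation code.

    # Illustrative sketch of the entropy idea: hash the inner flow headers into the outer
    # UDP source port so underlay ECMP spreads flows. Hash and port range are assumptions.

    import zlib

    def outer_udp_source_port(src_ip, dst_ip, proto, sport, dport):
        flow_key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
        entropy = zlib.crc32(flow_key)
        return 49152 + (entropy % 16384)   # fold the hash into the ephemeral port range

    print(outer_udp_source_port("192.168.1.3", "192.168.2.7", "tcp", 33412, 443))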

OpenContrail's use of VXLAN differs from the IETF draft in that it only implements the packet encapsulation part; it does not implement the flood-and-learn control plane, but instead uses the XMPP-based control plane described in this chapter, and as a result it does not require multicast groups in the underlay. For a more detailed description see [draft-ietf-l3vpn-end-system]. The forwarding description assumes IPv4, but the steps for IPv6 are similar. Forwarding for L2 overlays works exactly the same as forwarding for L3 overlays as described in the previous section, except that the forwarding tables contain MAC addresses instead of IP prefixes.

OpenContrail supports a hybrid mode where a virtual network is both an L2 and an L3 overlay simultaneously. OpenContrail supports IP multicast in L3 overlays. Multicast elaboration can be performed using multicast trees in the overlay or using multicast trees in the underlay; OpenContrail does multicast elaboration using multicast trees in the overlay rather than in the underlay. The details are described in [draft-marques-l3vpn-mcast-edge]; here we only summarize the basic concepts. Figure 15 illustrates the general concept of creating multicast trees in the overlay.

The vRouter at the root of the tree sends N copies of the traffic to N downstream vRouters. Those downstream vRouters send the traffic to N more downstream vRouters, and so on, until all listener vRouters are covered.

In this example N equals 2. The number N does not have to be the same at each vRouter. The details of the protocol are too complicated to describe here; see [draft-marques-l3vpn-mcast-edge] for details. Ingress replication, shown in Figure 16, can be viewed as a special degenerate case of general overlay multicast trees. In practice, however, the signaling of ingress replication trees is much simpler than the signaling of general overlay multicast trees.
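The following Python sketch illustrates the fan-out idea behind overlay multicast trees: starting from the root vRouter, listeners are attached at most N per node, breadth first. The tree-building rule is deliberately naive and purely illustrative; the real signaling described in [draft-marques-l3vpn-mcast-edge] is far more involved.

    # Illustrative sketch of overlay multicast fan-out: each vRouter replicates to at most
    # `fanout` downstream vRouters until every listener is covered. Naive breadth-first fill.

    def build_replication_tree(root, listeners, fanout):
        """Return a parent -> children map forming an at-most-`fanout`-ary tree."""
        tree = {root: []}
        frontier = [root]
        remaining = list(listeners)
        while remaining:
            parent = frontier.pop(0)
            children = remaining[:fanout]
            remaining = remaining[fanout:]
            tree[parent] = children
            for child in children:
                tree[child] = []
                frontier.append(child)
        return tree

    tree = build_replication_tree("vrouter-root", [f"vrouter-{i}" for i in range(1, 7)], fanout=2)
    for parent, children in tree.items():
        if children:
            print(parent, "->", children)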

An alternative approach is to do multicast elaboration using multicast trees in the underlay. The flood-and-learn control plane for VXLAN described in [draft-mahalingam-dutt-dcops-vxlan] also relies on underlay multicast trees. The underlay multicast tree is implemented as a GRE tunnel with a multicast destination address.

This implies that the underlay network must support IP multicast, it must run some multicast routing protocol (typically Protocol Independent Multicast, PIM), and it must have one multicast group per underlay multicast tree.

Multicast trees in the underlay require IP multicast support on the data center switches. In practice this can be a problem for a number of reasons. In OpenContrail, unknown unicast traffic is dropped instead of being flooded because the system does not rely on flood-and-learn to fill the MAC tables. Instead, it uses a control plane protocol to fill the MAC tables, and if the destination is not known, it indicates some other malfunction in the system.

L2 broadcasts are also avoided because most L2 broadcast traffic is caused by a small set of protocols. For any remaining L2 broadcast and multicast, the system creates one distribution tree per virtual network connecting all routing instances for that virtual network. That tree can be constructed either in the overlay or in the underlay, with the same pros and cons for each approach.

The vRouter proxies several types of traffic from the VM and avoids flooding: the vRouter intercepts the request and forwards it to a control node over XMPP, and the control node sends the response back over XMPP. The vRouter forwarding plane contains a flow table used for multiple different functions: firewall policies, load balancing, statistics, and so on.

The flow table contains flow entries that have match criteria and associated actions. The match criteria can be an N-tuple match on received packets (wildcard fields are possible). The actions include dropping the packet, allowing the packet, or redirecting it to another routing instance. The flow entries are programmed in the forwarding plane by the vRouter agent.

The flow table is programmed to punt packets to the vRouter Agent for which there is no entry in the flow table. This allows the vRouter agent to see the first packet of every new flow.

The vRouter agent will install a flow entry for each new flow and then re-inject the packet into the forwarding plane; a minimal sketch of this punt-and-install cycle is shown below.
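Here is a minimal Python sketch of that cycle, assuming an invented five-tuple match and a stand-in policy decision; it only illustrates the punt, install, and re-inject sequence described above.

    # Illustrative sketch of the punt-and-install cycle. Match fields and the policy
    # decision are invented; this is not the vRouter implementation.

    class ForwardingPlane:
        def __init__(self, agent):
            self.flow_table = {}      # five-tuple -> action
            self.agent = agent

        def handle_packet(self, five_tuple, packet):
            action = self.flow_table.get(five_tuple)
            if action is None:
                return self.agent.on_flow_miss(self, five_tuple, packet)   # punt first packet
            return action

    class VRouterAgent:
        def on_flow_miss(self, fwd_plane, five_tuple, packet):
            action = "allow" if five_tuple[4] == 80 else "drop"    # stand-in for real policy
            fwd_plane.flow_table[five_tuple] = action              # install the flow entry
            return fwd_plane.handle_packet(five_tuple, packet)     # re-inject the packet

    fwd = ForwardingPlane(VRouterAgent())
    print(fwd.handle_packet(("10.1.1.2", "10.1.2.3", "tcp", 51515, 80), b"GET /"))   # allow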

OpenContrail supports a high-level policy language that allows virtual networks to be connected, subject to policy constraints. This policy language is similar to the Snort [snort] rules language [snort-rules-intro], but that may change as the system is extended. A policy rule looks similar to the example sketched below. Such a rule allows all traffic to flow from virtual network src-vn to virtual network dst-vn and forces the traffic through a service chain that consists of service svc-1 followed by service svc-2. The rule applies when any virtual machine in virtual network src-vn sends traffic to any virtual machine in virtual network dst-vn.
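The exact rule syntax is not reproduced in this excerpt. As a purely illustrative sketch, in which only the names src-vn, dst-vn, svc-1, and svc-2 come from the text above and the keywords are assumptions, such a rule might look like:

    allow any src-vn -> dst-vn apply svc-1, svc-2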

The system is mostly concerned with traffic steering, i.e., forcing the traffic through the right sequence of service virtual machines. The system creates additional routing instances for service virtual machines in addition to the routing instances for tenant virtual machines.

The IETF draft [draft-rfernando-virt-topo-bgp-vpn] describes a similar mechanism for service chaining. IF-MAP provides an extensible mechanism for defining data models. It also defines a protocol to publish, subscribe, and search the contents of a data store.

Control nodes can use the subscribe mechanism to only receive the subset of configuration in which they are interested. XMPP was originally named Jabber and was used for instant messaging, presence information, and contact list maintenance. Designed to be extensible, the protocol has since evolved into a general publish-subscribe message bus and is now used in many applications. OpenContrail uses XMPP as a general-purpose message bus between the compute nodes and the control node to exchange multiple types of information including routes, configuration, operational state, statistics, logs and events.
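The following Python sketch illustrates the subscribe-to-a-subset idea: a control node registers interest in only certain configuration sections, and the store pushes changes only to the matching subscribers. This is a toy publish-subscribe model, not the IF-MAP protocol itself.

    # Illustrative toy publish-subscribe model (not the IF-MAP protocol): a control node
    # subscribes only to the configuration sections it needs.

    from collections import defaultdict

    class ConfigStore:
        def __init__(self):
            self.subscribers = defaultdict(list)   # config section -> callbacks

        def subscribe(self, section, callback):
            self.subscribers[section].append(callback)

        def publish(self, section, obj):
            for callback in self.subscribers[section]:
                callback(obj)

    store = ConfigStore()
    store.subscribe("routing-instances", lambda obj: print("control node received:", obj))
    store.publish("routing-instances", {"name": "red-vn-ri"})   # delivered to the subscriber
    store.publish("analytics-settings", {"retention": "7d"})    # no subscriber, not delivered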

BGP can also be used to exchange routing information between the control nodes and the gateway nodes (routers and switches from major networking vendors). Sandesh is an XML-based protocol for reporting analytics information. The structure of the XML messages is described in published schemas. Each component on every node has a Sandesh connection to one of the analytics nodes.

As noted earlier, Sandesh carries two kinds of messages: asynchronous messages for logs, events, and traces, and synchronous request-response messages for collecting operational state. The Nova module in OpenStack instructs the Nova agent in the compute node to create the virtual machine.

The Nova agent communicates with the OpenContrail Neutron plug-in to retrieve the network attributes of the new virtual machine (e.g., its IP address). Once the virtual machine is created, the Nova agent informs the vRouter agent, which configures the virtual network for the newly created virtual machine (e.g., by setting up the appropriate routing-instance and route state).
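The following Python sketch walks through that sequence with invented class and method names (it is not the Nova, Neutron, or OpenContrail API): the orchestration agent fetches network attributes from the plug-in, boots the VM, and then informs the vRouter agent.

    # Illustrative sketch of the VM bring-up sequence with invented class and method names
    # (this is not the Nova, Neutron, or OpenContrail API).

    class NeutronPlugin:
        def network_attributes(self, vm_id):
            return {"virtual_network": "red-vn", "ip": "192.168.1.3", "mac": "52:54:00:aa:bb:cc"}

    class VRouterAgentStub:
        def configure_vm_networking(self, vm_id, attrs):
            print(f"vRouter agent: setting up {attrs['virtual_network']} state "
                  f"for {vm_id} with address {attrs['ip']}")

    class NovaAgentStub:
        def __init__(self, plugin, vrouter_agent):
            self.plugin, self.vrouter_agent = plugin, vrouter_agent

        def create_vm(self, vm_id):
            attrs = self.plugin.network_attributes(vm_id)     # retrieve network attributes
            print(f"Nova agent: booting {vm_id}")             # create the virtual machine
            self.vrouter_agent.configure_vm_networking(vm_id, attrs)

    NovaAgentStub(NeutronPlugin(), VRouterAgentStub()).create_vm("vm-42")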

TLS can also be used to provide confidentiality, although there is typically no need for that within the confines of a data center. For the initial service discovery, certificates are used for authentication; for all subsequent communications, token-based authentication is used for improved performance.

The service discovery server issues the tokens to both the servers and the clients over certificate-authenticated TLS connections. The distribution of the certificates is out of scope of this document; in practice it is typically handled by a server management system such as Puppet or Chef. Servers establish the identity of clients using TLS authentication and assign them one or more roles. The roles determine what operations the client is allowed to perform over the interface.
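The following Python sketch illustrates the access-control idea, with invented identities and roles: a token is issued after a certificate-authenticated exchange, and the roles bound to that identity decide which operations are permitted.

    # Illustrative sketch of the access-control idea: a token issued after a
    # certificate-authenticated exchange, plus a role check. Identities and roles are invented.

    import secrets

    TOKENS = {}   # token -> client identity
    ROLES = {"vrouter-agent-7": {"subscribe-routes"}, "ops-dashboard": {"read-analytics"}}

    def issue_token(identity_from_tls_cert):
        token = secrets.token_hex(16)
        TOKENS[token] = identity_from_tls_cert
        return token

    def authorize(token, operation):
        identity = TOKENS.get(token)
        return identity is not None and operation in ROLES.get(identity, set())

    t = issue_token("vrouter-agent-7")        # after the certificate-authenticated TLS handshake
    print(authorize(t, "subscribe-routes"))   # True
    print(authorize(t, "read-analytics"))     # False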

For high availability, as well as for horizontal scaling, there are multiple instances of the control nodes, the configuration nodes, and the analytics nodes. All nodes are active-active; OpenContrail does not use the active-standby concept. Currently, each control node contains all operational state for the entire system (for example, all routes for all virtual machines in all virtual networks). The total amount of control state is small enough to fit comfortably in the memory of each control node.

As more features are added, aggregation and sharding of control state across the control nodes may be introduced in the future, using principles similar to those of route-target-specific route reflectors in BGP.

Each vRouter agent connects to two or more control nodes, where all nodes are active-active. The vRouter agent receives all its state (routes, routing-instance configuration, etc.) from each of the control nodes to which it is connected. The state received from the two or more nodes is guaranteed to be eventually consistent but may be transiently inconsistent.

It makes a local decision about which copy of the control state to use. If a control node fails, the vRouter agent will notice that the connection to that control node is lost. The vRouter agent will flush all state from the failed control node. It already has a redundant copy of all the state from the other control node. The vRouter can locally and immediately switch over without any need for resynchronization. The vRouter agent will contact the service discovery server again to re-establish a connection with a new control node to replace the failed control node.
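The following Python sketch illustrates that failover behaviour with an invented discovery interface: the agent keeps a per-peer copy of state from two active-active control nodes, flushes the copy from a failed peer while continuing to use the surviving copy, and asks discovery for a replacement.

    # Illustrative sketch of control node failover with an invented discovery interface:
    # flush state from the failed peer, keep using the surviving copy, pick a replacement.

    class Discovery:
        def pick_control_nodes(self, count, exclude=()):
            available = [n for n in ("ctrl-1", "ctrl-2", "ctrl-3") if n not in exclude]
            return available[:count]

    class VRouterAgentHA:
        def __init__(self, discovery):
            self.discovery = discovery
            self.peers = discovery.pick_control_nodes(count=2)
            self.state = {peer: {} for peer in self.peers}   # per-peer copy of routes etc.

        def on_peer_down(self, failed_peer):
            self.state.pop(failed_peer, None)                # flush state from the failed node
            self.peers.remove(failed_peer)                   # forwarding continues from the copy
            replacement = self.discovery.pick_control_nodes(
                count=1, exclude=self.peers + [failed_peer])[0]
            self.peers.append(replacement)
            self.state[replacement] = {}                     # filled once the new session is up

    agent = VRouterAgentHA(Discovery())
    agent.on_peer_down("ctrl-1")
    print(agent.peers)   # ['ctrl-2', 'ctrl-3']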

The configuration nodes store all configuration state in a fault-tolerant and highly available NoSQL database. This includes the contents of the high-level data model, i.e., the services that have been configured, as well as the contents of the low-level data model, i.e., the technology objects that the configuration nodes derive from it. The control nodes use IF-MAP to subscribe to just the part of the low-level data model that is needed for the control plane.

The service discovery server assigns each control node to a particular configuration node. If that configuration node fails, the control node re-contacts the service discovery server and is assigned to a different configuration node. The analytics nodes are stateless, hence failure of the analytics components does not cause the system to lose messages.

The failed analytics node is taken out of the pool of available nodes, and one of the remaining analytics nodes takes over the work of collecting data and handling queries.

The database cluster is set up with multiple replicas, so the data itself is resilient to database node failures. Upon failure of a database node, the analytics nodes smoothly transition from the failed node to a functioning node.

During this transition the data is queued, so data loss is minimal. vRouter high availability is based on the graceful restart model used by many routing protocols, including BGP.

If the vRouter agent restarts for any reason (crash, upgrade), the vRouter forwarding plane continues to forward traffic using the forwarding state that was installed by the vRouter agent prior to the restart. While the vRouter agent is down, the vRouter forwarding plane is running in a headless mode. All forwarding state is marked as stale. When the vRouter agent restarts, it re-establishes connections to a pair of redundant control nodes.

It re-learns all state from the control nodes and re-installs the fresh state in the forwarding plane, replacing the stale state. When the vRouter agent finishes re-learning the state from the control nodes and completes re-installing fresh state in the forwarding plane, any remaining stale state in the forwarding plane is flushed.
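The following Python sketch illustrates the graceful-restart bookkeeping described above: existing entries keep forwarding but are marked stale, re-learned entries clear the mark, and anything still stale afterwards is flushed. The data layout is invented for the example.

    # Illustrative sketch of the graceful-restart bookkeeping: mark everything stale on
    # restart, clear the mark as state is re-learned, flush whatever remains stale.

    class ForwardingState:
        def __init__(self):
            self.entries = {}   # route -> {"nexthop": ..., "stale": bool}

        def mark_all_stale(self):
            for entry in self.entries.values():
                entry["stale"] = True

        def reinstall(self, route, nexthop):
            self.entries[route] = {"nexthop": nexthop, "stale": False}

        def flush_stale(self):
            self.entries = {r: e for r, e in self.entries.items() if not e["stale"]}

    state = ForwardingState()
    state.reinstall("192.168.1.3/32", "tunnel-to-10.0.0.12")
    state.reinstall("192.168.1.9/32", "tunnel-to-10.0.0.13")
    state.mark_all_stale()                                    # agent went down and came back
    state.reinstall("192.168.1.3/32", "tunnel-to-10.0.0.12")  # re-learned from the control nodes
    state.flush_stale()                                       # the .9 route was never re-learned
    print(list(state.entries))                                # ['192.168.1.3/32']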

If the vRouter forwarding plane restarts for any reason (crash, upgrade), there will be an interruption in traffic processing for that particular server. This is unavoidable because there is only a single instance of the vRouter forwarding plane on each server. It is important to keep the vRouter forwarding plane as simple as possible to minimize the probability of crashes or the need for upgrades. Underlying all state in the system, whether configuration, operational, or analytics, is a set of data models.

Each data model defines a set of objects, their semantics, and the relationships between them. The system operates on these data models to perform its tasks. These data models offer certain capabilities to the modules that manipulate them and, in turn, impose certain requirements on them.

The main result of this data-model-based design is that the system is distributed, scalable, highly available, easily upgradable, and elastic. Data models are essentially annotated graphs, with vertices that represent objects and links that represent relationships between objects, and the system uses a Data Modeling Language (DML) to specify them. Some of the semantics of objects and the relationships between them are captured directly in the data model; for example, a vertex in the graph may represent an abstract or concrete object, and a link may represent a parent-child relationship or a dependency between a pair of objects.

The remaining semantics are captured in the annotations on vertices and links; for example, a link that represents connectivity between a pair of vertices may be annotated with the required bandwidth, or a link between routing instances may be annotated with the desired routing policy.
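The following Python sketch illustrates a data model as an annotated graph: vertices are objects, links are relationships, and annotations on either carry the remaining semantics. The object types and annotation keys are invented for the example.

    # Illustrative sketch of a data model as an annotated graph. Object types and
    # annotation keys are invented for the example.

    class Vertex:
        def __init__(self, name, kind, **annotations):
            self.name, self.kind, self.annotations = name, kind, annotations

    class Link:
        def __init__(self, src, dst, relation, **annotations):
            self.src, self.dst, self.relation, self.annotations = src, dst, relation, annotations

    red_vn = Vertex("red-vn", "virtual-network")
    blue_vn = Vertex("blue-vn", "virtual-network")
    policy = Vertex("red-to-blue", "connectivity-policy")

    links = [
        Link(red_vn, policy, "applies-policy"),
        Link(policy, blue_vn, "permits-traffic-to", required_bandwidth="100 Mbps"),
    ]

    for link in links:
        print(link.src.name, "-[" + link.relation + "]->", link.dst.name, link.annotations)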
