Examining Open vSwitch Traffic Patterns

In this post, I want to provide some additional insight on how the use of Open vSwitch (OVS) affects—or doesn’t affect, in some cases—how a Linux host directs traffic through physical interfaces, OVS internal interfaces, and OVS bridges. This is something that I had a hard time understanding as I started exploring more advanced OVS configurations, and hopefully the information I share here will be helpful to others.

To help structure this discussion, I’m going to walk through a few different OVS configurations and scenarios. In these scenarios, I’ll use the following assumptions:

· The physical host has four interfaces (eth0, eth1, eth2, and eth3)

· The host is running Linux with KVM, libvirt, and OVS installed

Scenario 1: Simple OVS Configuration

In this first scenario let’s look at a relatively simple OVS configuration, and examine how Linux host and guest domain traffic moves into or out of the network.

Let’s assume that our OVS configuration looks something like this (this is the output from ovs-vsctl show):

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "eth0"
            Interface "eth0"
    Bridge "br1"
        Port "br1"
            Interface "br1"
                type: internal
        Port "eth1"
            Interface "eth1"
    ovs_version: "1.7.1"


This is a pretty simple configuration; there are two bridges, each with a single physical interface. Let’s further assume, for the purposes of this scenario, that eth2 has an IP address and is communicating properly with other hosts on the network. The eth3 interface is shut down.
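
For reference, a configuration like this could be built with ovs-vsctl commands along these lines (just a sketch, using the interface names assumed in this post):

# Create two bridges, each with a single physical uplink
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ovs-vsctl add-br br1
ovs-vsctl add-port br1 eth1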

So, in this scenario, how does traffic move into or out of the host?

1. Traffic from a guest domain: Traffic from a guest domain will travel through the OVS bridge to which it is attached (you’d see an additional “vnet0” port and interface appear on that bridge when you start the guest domain). So, a guest domain attached to br0 would communicate via eth0, and a guest domain attached to br1 would communicate via eth1. No real surprises here.

2. Traffic from the Linux host: Traffic from the Linux host itself will not communicate over any of the configured OVS bridges, but will instead use its native TCP/IP stack and any configured interfaces. Thus, since eth2 is configured and operational, all traffic to/from the Linux host itself will travel through eth2.

The interesting point (to me, at least) about #2 above is that this includes traffic from the OVS process itself. In other words, if the OVS process(es) need to communicate across the network, they won’t use the bridges—they’ll use whatever interfaces the Linux host uses to communicate. This is one thing that threw me off: because OVS is itself a Linux process, when OVS needs to communicate across the network it will use the Linux network stack to do so. In this scenario, then, OVS would not communicate over any configured bridge, but instead over eth2. (This makes perfect sense now, but I recall that it didn’t earlier. Maybe it’s just me.)
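
One easy way to see this for yourself is to ask the host’s routing table which interface it would use to reach a given destination; in this scenario the answer should be eth2. (The destination and addresses below are purely hypothetical.)

# Ask the kernel which interface would carry traffic to a remote host
ip route get 10.0.0.100
# example output: 10.0.0.100 via 192.168.2.1 dev eth2 src 192.168.2.10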

Scenario 2: Adding Bonding

In this second scenario, our OVS configuration changes only slightly:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    ovs_version: "1.7.1"


In this case, we’re now leveraging a bond that contains two physical interfaces (eth0 and eth1). (By the way, I have a write-up on configuring OVS and bonds, if you need/want more information.) The eth2 interface still has an IP address assigned and is up and communicating properly. The physical eth3 interface is shut down.
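
If you’re curious, the bond itself is created with a single ovs-vsctl command; a minimal sketch looks like this (the LACP and bond-mode settings are optional and shown only as an example):

# Create a bond of eth0 and eth1 on br0
ovs-vsctl add-bond br0 bond0 eth0 eth1

# Optionally enable LACP and a hash-based load-balancing mode
ovs-vsctl set port bond0 lacp=active bond_mode=balance-tcp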

How does this affect the way in which traffic is handled? It doesn’t, really. Traffic from guest domains will still travel across br0 (since this is the only configured OVS bridge), and traffic from the Linux host—including traffic from OVS itself—will still use whatever interfaces are determined by the host’s TCP/IP stack. In this case, that would be eth2.

Scenario 3: The Isolated Bridge

Let’s look at another OVS configuration, the so-called “isolated bridge”. This is a configuration that is commonly found in implementations using NVP, OpenStack, and others, and it’s a configuration that I recently discussed in my post on GRE tunnels and OVS.

Here’s the configuration:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    Bridge "br-int"
        Port "br-int"
            Interface "br-int"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.1.100"}
    ovs_version: "1.7.1"


As with previous configurations, we’ll assume that eth2 is up and operational, and eth3 is shut down. So how does traffic get directed in this configuration?

1. Traffic from guest domains attached to br0: This is as before—traffic will go out one of the physical interfaces in the bond, according to the bonding configuration (active-standby, LACP, etc.). Nothing unusual here.

2. Traffic from the Linux host: As before, traffic from processes on the Linux host will travel out according to the host’s TCP/IP stack. There are no changes from previous configurations.

3. Traffic from guest domains attached to br-int: Now, this is where it gets interesting. Guest domains attached to br-int (named “br-int” because in this configuration the isolated bridge is often called the “integration bridge”) don’t have any physical interfaces they can use; they can only use the GRE tunnel. Here’s the “gotcha”, so to speak: the GRE tunnel is created and maintained by the OVS process, and therefore it uses the host’s TCP/IP stack to communicate across the network. Thus, traffic from guest domains attached to br-int would hit the GRE tunnel, which would travel through eth2.

I’ll give you a second to let that sink in.

Ready now? Good! The key to understanding #3 is, in my opinion, understanding that the tunnel (a GRE tunnel in this case, but the same would apply to a VXLAN or STT tunnel) is created and maintained by the OVS process. Thus, because it is created and maintained by a process on the Linux host (OVS itself), the traffic for the tunnel is directed according to the host’s TCP/IP stack and IP routing table(s). In this configuration, the tunnels don’t travel through any of the configured OVS bridges.
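
To make that a bit more concrete, here’s a minimal sketch of how a tunnel port like gre0 is typically added, followed by a quick way to see which interface the host would actually use to reach the tunnel endpoint (the endpoint address matches the configuration above; the output shown is only illustrative, with hypothetical addresses):

# Add the GRE tunnel port to the isolated bridge
ovs-vsctl add-port br-int gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.100

# The encapsulated packets follow the host's routing table, not an OVS bridge
ip route get 192.168.1.100
# example output: 192.168.1.100 via 192.168.2.1 dev eth2 src 192.168.2.10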

Scenario 4: Leveraging an OVS Internal Interface

Let’s keep ramping up the complexity. For this scenario, we’ll use an OVS configuration that is the same as in the previous scenario:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    Bridge "br-int"
        Port "br-int"
            Interface "br-int"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.1.100"}
    ovs_version: "1.7.1"


The difference, this time, is that we’ll assume that eth2 and eth3 are both shut down. Instead, we’ve assigned an IP address to the br0 internal interface on bridge br0. OVS internal interfaces, like br0, appear to the Linux host much like physical interfaces, and can therefore be assigned IP addresses and used for communication. This is the approach I used in describing how to run host management across OVS.
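
As a rough sketch (the address is hypothetical, and many distributions would do this through their network configuration files rather than by hand), moving host connectivity onto the br0 internal interface might look like this:

# Give the br0 internal interface the host's IP address and bring it up
ip addr add 192.168.2.10/24 dev br0
ip link set br0 up

# eth2 and eth3 stay down; eth0/eth1 carry traffic only as members of the bond
ip link set eth2 down
ip link set eth3 down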

Here’s how this configuration affects traffic flow:

1. Traffic from guest domains attached to br0: No change here. Traffic from guest domains attached to br0 will continue to travel across the physical interfaces in the bond (eth0 and eth1, in this case).

2. Traffic from the Linux host: This time, the only interface with an IP address is the br0 internal interface. The br0 internal interface is attached to br0, so all traffic from the Linux host will travel across the physical interfaces in the bond (again, eth0 and eth1).

3. Traffic from guest domains attached to br-int: Because Linux host traffic is directed through br0 by virtue of using the br0 internal interface, this means that tunnel traffic is also directed through br0, as dictated by the Linux host’s TCP/IP stack and IP routing table(s).

As you can see, assigning an IP address to an OVS internal interface has a real impact on the way in which the Linux host directs traffic through OVS. This has both positive and negative impacts:

· One positive impact is that it allows for Linux host traffic (such as management or tunnel traffic) to take advantage of OVS bonds, thus gaining some level of NIC redundancy.

· A negative impact is that OVS is now “in band,” so upgrades to OVS will be disruptive to all traffic moving through OVS—which could potentially include host management traffic.

Let’s take a look at one final scenario.

Scenario 5: Using Multiple Bridges and Internal Interfaces

In this configuration, we’ll use an OVS configuration that is very similar to the configuration I showed in my post on GRE tunnels with OVS:

bc6b9e64-11d6-415f-a82b-5d8a61ed3fbd
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "mgmt0"
            Interface "mgmt0"
                type: internal
        Port "bond0"
            Interface "eth0"
            Interface "eth1"
    Bridge "br1"
        Port "br1"
            Interface "br1"
                type: internal
        Port "tep0"
            Interface "tep0"
                type: internal
        Port "bond1"
            Interface "eth2"
            Interface "eth3"
    Bridge "br-int"
        Port "br-int"
            Interface "br-int"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="192.168.1.100"}
    ovs_version: "1.7.1"


In this configuration, we have three bridges. br0 uses a bond that contains eth0 and eth1; br1 uses a bond that contains eth2 and eth3; and br-int is an isolated bridge with no physical interfaces. We have two “custom” internal interfaces, mgmt0 (on br0) and tep0 (on br1), to which IP addresses have been assigned and which are successfully communicating across the network. We’ll assume that mgmt0 and tep0 are on different subnets, and that tep0 is assigned to the 192.168.1.0/24 subnet.
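
For reference, internal ports like mgmt0 and tep0 are added and addressed roughly like this (a sketch; the addresses are hypothetical, except that tep0 sits on the 192.168.1.0/24 subnet, per the assumptions above):

# Add a management internal port to br0 and a tunnel-endpoint port to br1
ovs-vsctl add-port br0 mgmt0 -- set interface mgmt0 type=internal
ovs-vsctl add-port br1 tep0 -- set interface tep0 type=internal

# Assign addresses on separate subnets and bring the interfaces up
ip addr add 192.168.2.10/24 dev mgmt0
ip addr add 192.168.1.10/24 dev tep0
ip link set mgmt0 up
ip link set tep0 up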

How does traffic flow in this scenario?

1. Traffic from guest domains attached to br0: The behavior here is as it has been in previous configurations—guest domains attached to br0 will communicate across the physical interfaces in the bond.

2. Traffic from the Linux host: As it has been in previous scenarios, traffic from the Linux host is driven by the host’s TCP/IP stack and IP routing table(s). Because mgmt0 and tep0 are on different subnets, traffic from the Linux host will go out either br0 (for traffic moving through mgmt0) or br1 (for traffic moving through tep0), and thus will utilize the corresponding physical interfaces in the bonds on those bridges.

3. Traffic from guest domains attached to br-int: Because the GRE tunnel is on the 192.168.1.0/24 subnet, traffic for the GRE tunnel—which is created and maintained by the OVS process on the Linux host itself—will travel through tep0, which is attached to br1. Thus, the physical interfaces eth2 and eth3 would be leveraged for the GRE tunnel traffic.
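
A quick way to confirm that mapping is to ask the host’s routing table which interface it would use to reach the GRE endpoint; in this scenario it should come back with tep0 (the output below is an illustrative sketch with hypothetical addresses):

ip route get 192.168.1.100
# example output: 192.168.1.100 dev tep0 src 192.168.1.10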

Summary

The key takeaway from this post, in my mind, is understanding where traffic originates, and separating the idea of OVS as a switching mechanism (handling guest domain traffic) from OVS as a process on the Linux host itself (creating and maintaining tunnels between hosts).

Hopefully this information is helpful. I am, of course, completely open to your comments, questions, and corrections, so feel free to speak up in the comments below. Courteous comments are always welcome!
