Train Without Labeling Data Using Self-Supervised Learning by Relational Reasoning


Background and challenges 📋

In modern deep learning algorithms, the dependence on manual annotation of data is one of the major limitations. To train a good model, we usually have to prepare a vast amount of labeled data. When the number of classes and samples is small, we can take a model pre-trained on a labeled public dataset and fine-tune the last few layers on our own data. In real life, however, we easily face situations where the data is considerably large (all the products in a store, or human faces), and a model with just a few trainable layers will struggle to learn it. Furthermore, the amount of unlabeled data (e.g. document text, images on the Internet) is practically unlimited. Labeling all of it for a task is nearly impossible, yet not utilizing it is definitely a waste.


In this case, training a deep model from scratch on a new dataset is an option, but labeling the data takes a lot of time and effort, while a pre-trained deep model may no longer help. That is why self-supervised learning was born. The idea behind it is simple, and it serves two main tasks:


  • Surrogate (pretext) task: the deep model learns generalizable representations from unlabeled data without annotation, by self-generating a supervisory signal that exploits implicit information in the data.


  • Downstream task: the representations are fine-tuned for supervised learning tasks, e.g. classification and image retrieval, with a smaller amount of labeled data (how much labeled data is needed depends on the performance you require from the model).


Many different training approaches have been proposed to learn such representations:

  • Relative position [1]: the model needs to understand the spatial context of objects in order to tell the relative position between image parts.

  • Jigsaw puzzle [2]: the model needs to place 9 shuffled patches back in their original locations.

  • Colorization [3]: the model is trained to color a grayscale input image; precisely, the task is to map this image to a distribution over quantized color value outputs.

  • Counting features [4]: the model learns a feature encoder using the feature-counting relationships of input images under scaling and tiling transformations.

  • SimCLR [5]: the model learns representations for visual inputs by maximizing agreement between differently augmented views of the same sample via a contrastive loss in the latent space.


However, I would like to introduce an interesting approach that is able to recognize things the way a human does. A key factor in human learning is the acquisition of new knowledge by comparing related and different entities. So it is a nontrivial result if we can apply a similar mechanism in self-supervised machine learning, via the relational reasoning approach [6].


The relational reasoning paradigm is based on a key design principle: the use of a relation network as a learnable function on an unlabeled dataset to quantify the relationships between views of the same object (intra-reasoning) and between different objects in different scenes (inter-reasoning). The possibility of exploiting such a mechanism in self-supervised machine learning was evaluated on standard datasets (CIFAR-10, CIFAR-100, CIFAR-100-20, STL-10, tiny-ImageNet, SlimageNet), learning schedules, and backbones (both shallow and deep). The results reported in the paper [6] show that the relational reasoning approach largely outperforms the best competitor in all conditions, by an average of 14% accuracy, and the most recent state-of-the-art method by 3%.


Technique highlight 📄

[Figure: Relational definition example (Image by author)]

In the simplest terms, relational reasoning is a methodology that tries to help a learner understand the relations between different objects (ideas), rather than learning each object individually. This helps the learner easily distinguish and remember objects by their differences. There are two main components in the relational reasoning system [6]: the backbone structure and the relation head. The relation head is used in the pretext task phase to support the underlying neural network backbone in learning useful representations from the unlabeled dataset, and is then discarded. The backbone structure is used in downstream tasks such as classification or image retrieval after training on the pretext task.


  • Previous work: focuses on within-scene relations, meaning that all the elements of the same object belong to the same scene (e.g. balls from a basket); training is done on a labeled dataset, and the main goal is the relation head [7].

  • New approach: focuses on relations between different views of the same object (intra-reasoning) and between different objects in different scenes (inter-reasoning); relational reasoning is used on unlabeled data, and the relation head is a pretext task for learning useful representations in the underlying backbone.


Let’s discuss the important points in each part of the relational reasoning system:


1. Mini-batch augmentation


As mentioned before, this system introduces intra-reasoning and inter-reasoning. So why do we need them? Without labels, it is not possible to create pairs of similar and dissimilar objects directly. To solve this problem, a bootstrapping technique is applied, which results in intra-reasoning and inter-reasoning, where:


  • Intra-reasoning consists of sampling random augmentations of the same object {A1; A2} (a positive pair), e.g. different views of the same basketball.


  • Inter-reasoning consists of coupling two random objects {A1; B1} (a negative pair), e.g. a basketball paired with a random ball.


Furthermore, random augmentation functions (e.g. geometric transformations, color distortion) are used to make between-scene reasoning more challenging. These augmentations force the learner (the backbone) to pay attention to the correlation between a wider set of features (e.g. color, size, texture, etc.). For instance, in the pair {football, basketball}, color alone is a strong predictor of the class. However, with random changes of color as well as shape and size, it becomes difficult for the learner to discriminate between the two based on a single cue. The learner has to look at other features, which results in better representations. A minimal sketch of these two kinds of pairs is shown below.

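To make this concrete, here is a small hypothetical sketch of forming a positive and a negative pair with standard torchvision transforms; the exact augmentation pipeline used in the paper differs, and the images here are random stand-ins:

import torch
import torchvision.transforms as T

# Hypothetical random augmentation pipeline: geometric transform + color distortion
aug = T.Compose([
    T.RandomResizedCrop(96, scale=(0.3, 1.0)),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
])

image_A = torch.rand(3, 96, 96)  # stand-in for one unlabeled image
image_B = torch.rand(3, 96, 96)  # stand-in for a different unlabeled image

A1, A2 = aug(image_A), aug(image_A)  # intra-reasoning: positive pair {A1; A2}
B1 = aug(image_B)                    # inter-reasoning: negative pair {A1; B1}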

2. Metric learning


The aim of metric learning is to use a distance metric to bring the representations of similar inputs (positives) closer together while pushing the representations of dissimilar inputs (negatives) apart. However, metric learning in relational reasoning is fundamentally different:


[Figure: Comparison of the metric learning and relational reasoning]
[Figure: Learning metric — Contrastive loss (Image by author)]
[Figure: Learning metric — Relation score (Image by author)]

3. Loss function


The learning objective is a binary classification problem over the representation pairs. Therefore we can use a binary cross-entropy loss, whose minimization corresponds to maximizing a Bernoulli log-likelihood, where the relation score y represents a probabilistic estimate of pair membership, induced through a sigmoid activation function.


[Figure: Loss function]

Finally, the paper [6] also reports results of relational reasoning on standard datasets (CIFAR-10, CIFAR-100, CIFAR-100-20, STL-10, tiny-ImageNet, SlimageNet) with different backbones (shallow and deep) under the same learning schedule (epochs). For the detailed results and further information you can check out the paper.


Experimental evaluation 📊

In this article, I want to reproduce the relational reasoning system on the public image dataset STL-10. This dataset comprises 10 classes (airplane, bird, automobile, cat, deer, dog, horse, monkey, ship, truck) of 96x96 color images.


First of all, we need to import some important libraries.

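The original post embedded this code as a gist that did not survive here; a minimal set of imports covering the snippets below might look like this (torchvision is assumed for the dataset, models, and transforms):

import torch
import torch.nn as nn
import torchvision.transforms as T
from torchvision import models
from torchvision.datasets import STL10
from torch.utils.data import DataLoader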

The STL-10 dataset consists of 13,000 labeled images (500 training and 800 test images per class). However, it also includes 100,000 unlabeled images from a similar but broader distribution. For instance, it contains other types of animals (bears, rabbits, etc.) and vehicles (trains, buses, etc.) in addition to those in the labeled set.

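As a sketch, the three STL-10 splits can be downloaded with torchvision (the root directory 'data' is an arbitrary choice of mine):

from torchvision.datasets import STL10

# Download the three STL-10 splits used in this article
unlabeled_set = STL10('data', split='unlabeled', download=True)
train_set = STL10('data', split='train', download=True)
test_set = STL10('data', split='test', download=True)
print(len(unlabeled_set), len(train_set), len(test_set))  # 100000 5000 8000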

[Figure: STL-10 dataset]

Then we create the relational reasoning class based on the author's suggestion.

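The original gist is missing here, so the following is a simplified sketch based on the paper's description [6]. The author's reference implementation differs in details (head width, optimizer settings, learning-rate schedule, pairing scheme); treat this as an illustration, not the author's code:

import torch
import torch.nn as nn

class RelationalReasoning:
    """Simplified sketch of the self-supervised relational reasoning trainer [6]."""

    def __init__(self, backbone, feature_size=64):
        self.backbone = backbone
        # Relation head: scores an aggregated (concatenated) pair of
        # representations with a single logit
        self.relation_head = nn.Sequential(
            nn.Linear(feature_size * 2, 256),
            nn.BatchNorm1d(256),
            nn.LeakyReLU(),
            nn.Linear(256, 1))

    def aggregate(self, features, K):
        """Build positive pairs (same image, different augmentations) and
        negative pairs (different images) by concatenating representations."""
        relation_pairs, targets = [], []
        size = features.shape[0] // K  # images per augmentation block
        shift = 1
        for i in range(K):
            for j in range(i + 1, K):
                pos = torch.cat([features[i*size:(i+1)*size],
                                 features[j*size:(j+1)*size]], dim=1)
                # Rolling the second block pairs each image with a different one
                neg = torch.cat([features[i*size:(i+1)*size],
                                 torch.roll(features[j*size:(j+1)*size], shift, dims=0)], dim=1)
                relation_pairs += [pos, neg]
                targets += [torch.ones(size), torch.zeros(size)]
                shift += 1
        return torch.cat(relation_pairs), torch.cat(targets)

    def train(self, tot_epochs, train_loader):
        device = next(self.backbone.parameters()).device
        self.relation_head.to(device)
        optimizer = torch.optim.Adam(
            list(self.backbone.parameters()) + list(self.relation_head.parameters()),
            lr=1e-3)
        criterion = nn.BCEWithLogitsLoss()
        self.backbone.train()
        self.relation_head.train()
        for epoch in range(tot_epochs):
            for views, _ in train_loader:  # views: list of K augmented batches
                K = len(views)
                x = torch.cat(views, dim=0).to(device)  # [B*K, C, H, W]
                features = self.backbone(x)
                pairs, targets = self.aggregate(features, K)
                scores = self.relation_head(pairs).squeeze(1)
                loss = criterion(scores, targets.to(device))
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            print(f"Epoch [{epoch + 1}/{tot_epochs}] loss: {loss.item():.4f}")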

To compare the performance of the relational reasoning methodology on shallow and deep models, we will create a shallow model (Conv4) and also use the structure of a deep model (Resnet34).

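The post does not include the Conv4 definition; a plausible minimal 4-block backbone producing a 64-dimensional feature vector (matching the linear layer used later) could be:

import torch.nn as nn

class Conv4(nn.Module):
    """Hypothetical shallow backbone: 4 conv blocks -> 64-d feature vector."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x):
        return self.layers(x)  # output: [N, 64]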

# choose one of the two backbones
backbone = Conv4()                              # shallow model
# backbone = models.resnet34(pretrained=False)  # deep model

Some hyperparameters and augmentation strategies are set based on the author's suggestions. We will train our backbone with the relation head on the unlabeled STL-10 dataset.

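A sketch of the setup, with K = 16 and 10 epochs taken from the text; the batch size and the exact transform parameters are my assumptions, in the spirit of the paper's augmentation strategy:

import torchvision.transforms as T
from torchvision.datasets import STL10
from torch.utils.data import DataLoader

K = 16             # augmentations per image
batch_size = 64    # images per mini-batch (assumed value)
tot_epochs = 10
feature_size = 64  # 64 for Conv4, 1000 for the default resnet34 head

# Random augmentations: geometric transformation + color distortion
base_transform = T.Compose([
    T.RandomResizedCrop(96, scale=(0.3, 1.0)),
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.8, 0.8, 0.8, 0.2)], p=0.8),
    T.RandomGrayscale(p=0.2),
    T.ToTensor()])

class MultiView:
    """Return K independently augmented views of the same image."""
    def __init__(self, transform, K):
        self.transform, self.K = transform, K
    def __call__(self, img):
        return [self.transform(img) for _ in range(self.K)]

unlabeled_set = STL10('data', split='unlabeled', download=True,
                      transform=MultiView(base_transform, K))
# the default collate function stacks each of the K views into its own batch tensor
train_loader = DataLoader(unlabeled_set, batch_size=batch_size, shuffle=True)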

[Figure: Sample of images of each batch (K = 16)]

Up to now, we have created everything necessary to train our model. We will now train the backbone and relation head for 10 epochs with 16 augmentations per image (K). This took 4 hours for the shallow model (Conv4) and 6 hours for the deep model (Resnet34) on a single Tesla P100-PCIE-16GB GPU (you can freely change the number of epochs as well as other hyperparameters to obtain better results).


device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
backbone.to(device)
model = RelationalReasoning(backbone, feature_size)
model.train(tot_epochs=tot_epochs, train_loader=train_loader)
torch.save(model.backbone.state_dict(), 'model.tar')

After training the backbone model, we discard the relation head and use only the backbone for the downstream task. We fine-tune the backbone with the labeled STL-10 training data (500 images per class) and test the final model on the test set (800 images per class). The training and test datasets are loaded in a Dataloader without augmentations.

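A sketch of loading the labeled splits without augmentations (the loader names are my own):

import torchvision.transforms as T
from torchvision.datasets import STL10
from torch.utils.data import DataLoader

plain_transform = T.ToTensor()  # no augmentations for fine-tuning and evaluation
train_set = STL10('data', split='train', transform=plain_transform)
test_set = STL10('data', split='test', transform=plain_transform)
train_loader_lineval = DataLoader(train_set, batch_size=128, shuffle=True)
test_loader_lineval = DataLoader(test_set, batch_size=128, shuffle=False)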

We load the pretrained backbone model and use a simple linear model to map the output features to the number of classes in the dataset.


# linear model (input size must match the backbone's output features)
linear_layer = torch.nn.Linear(64, 10)      # if backbone is Conv4
# linear_layer = torch.nn.Linear(1000, 10)  # if backbone is Resnet34

# defining a raw backbone model
backbone_lineval = Conv4()  # Conv4
# backbone_lineval = models.resnet34(pretrained=False)  # Resnet34

# load the pretrained weights
checkpoint = torch.load('model.tar')  # name of the pretrained weights file
backbone_lineval.load_state_dict(checkpoint)

This time, only the linear model is trained; the backbone model is frozen. First, we will look at the result of the fine-tuned Conv4.

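A sketch of the linear evaluation loop under these assumptions: the backbone stays in eval mode and receives no gradients, and only linear_layer is optimized (optimizer and learning rate are my choices):

import torch

device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
backbone_lineval.to(device)
linear_layer.to(device)

optimizer = torch.optim.Adam(linear_layer.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

backbone_lineval.eval()  # frozen backbone
linear_layer.train()
for epoch in range(10):
    for images, labels in train_loader_lineval:
        images, labels = images.to(device), labels.to(device)
        with torch.no_grad():  # no gradient through the backbone
            features = backbone_lineval(images)
        loss = criterion(linear_layer(features), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"Epoch [{epoch + 1}/10] loss: {loss.item():.4f}")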

And then we check the accuracy on the test set.

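A matching sketch for measuring test accuracy:

correct, total = 0, 0
linear_layer.eval()
with torch.no_grad():
    for images, labels in test_loader_lineval:
        images, labels = images.to(device), labels.to(device)
        logits = linear_layer(backbone_lineval(images))
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
print(f"Test accuracy: {100.0 * correct / total:.2f}%")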

Conv4 obtained 49.98% accuracy on the test set. This means that the backbone model was able to learn useful features from the unlabeled dataset; we only needed to fine-tune for a few epochs to achieve a good result. Now let's check the performance of the deep model.


Then we evaluate on the test dataset in the same way.


This is much better: we obtain 55.38% accuracy on the test set. The main goal of this article is to reproduce and evaluate the relational reasoning methodology for teaching a model to distinguish objects without labels, so these results are very promising. If you are not satisfied, you can freely experiment by changing hyperparameters such as the number of augmentations, the number of epochs, or the model structure.


Final Thoughts 📕

Self-supervised relational reasoning is effective in both quantitative and qualitative terms, and with backbones of different sizes, from shallow to deep. Representations learned through comparison can easily be transferred from one domain to another; they are fine-grained and compact, which may be due to the correlation between accuracy and the number of augmentations. In relational reasoning, the number of augmentations plays a primary role in the quality of the resulting object clusters, based on the author's experiments [6]. Self-supervised learning has strong potential to become the future of machine learning in many respects.


You can contact me if you want further discussion. Here is my Linkedin


Enjoy!!! 👦🏻


Translated from: https://towardsdatascience.com/train-without-labeling-data-using-self-supervised-learning-by-relational-reasoning-b0298ad818f9
