MXNet Tutorial

This follows the official tutorial, which is decent; I walk through it with my own examples: how to design your own network and write your own iterator.

1: Import the modules:

import mxnet as mx
import numpy as np
import cv2
import matplotlib.pyplot as plt
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

2: Create the network:

# Variables are place holders for input arrays. We give each variable a unique name.
data = mx.symbol.Variable('data')
# The input is fed to a fully connected layer that computes Y=WX+b.
# This is the main computation module in the network.
# Each layer also needs an unique name. We'll talk more about naming in the next section.
fc1  = mx.symbol.FullyConnected(data = data, name='fc1', num_hidden=128)
# Activation layers apply a non-linear function on the previous layer's output.
# Here we use Rectified Linear Unit (ReLU) that computes Y = max(X, 0).
act1 = mx.symbol.Activation(data = fc1, name='relu1', act_type="relu")
fc2  = mx.symbol.FullyConnected(data = act1, name = 'fc2', num_hidden = 64)
act2 = mx.symbol.Activation(data = fc2, name='relu2', act_type="relu")
fc3  = mx.symbol.FullyConnected(data = act2, name='fc3', num_hidden=10)
# Finally we have a loss layer that compares the network's output with label and generates gradient signals.
mlp  = mx.symbol.SoftmaxOutput(data = fc3, name = 'softmax')

3: Display the network:

mx.viz.plot_network(mlp)

This doesn't display in Spyder, however, so I use the following instead, which writes an image of the graph to the run directory:

mx.viz.plot_network(mlp).view()  
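If you would rather save the graph to a specific file than open a viewer, the graphviz Digraph that plot_network returns can be rendered directly. This is a minimal sketch; the filename and the png format are my own choices, not from the original post:

graph = mx.viz.plot_network(mlp)
graph.format = 'png'       # output format is an attribute of the graphviz Digraph
graph.render('mlp_graph')  # writes mlp_graph.png to the working directory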

4: Load the data:

The official MXNet tutorial tests only on the MNIST dataset, so we use that here.

Because the dataset is hard to download directly, create a data folder under the example directory, create an mldata folder inside it, and place the original_mnist.mat file downloaded from GitHub there.

from sklearn.datasets import fetch_mldata
import os,sys
curr_path = sys.path[0]
sys.path = [os.path.join("/home/hu/mxnet-master/example/autoencoder")] + sys.path
import data
X, Y = data.get_mnist()
for i in range(10):
    plt.subplot(1, 10, i+1)
    plt.imshow(X[i].reshape((28, 28)), cmap='Greys_r')
    plt.axis('off')
plt.show()
X = X.astype(np.float32)/255
X_train = X[:60000]
X_test = X[60000:]
Y_train = Y[:60000]
Y_test = Y[60000:]

5: Set up the data iterators:

MXNet's data iterators can also be written by hand (examples are easy to find online); MXNet training essentially just iterates over the data one batch at a time. A hand-written iterator is sketched after the official version below.

batch_size = 100
train_iter = mx.io.NDArrayIter(X_train, Y_train, batch_size=batch_size)
test_iter = mx.io.NDArrayIter(X_test, Y_test, batch_size=batch_size)
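As a hedged illustration of a hand-written iterator (a minimal sketch; SimpleIter and its internals are hypothetical names of mine, not from the original post — mx.io.NDArrayIter already does all of this), an iterator only needs to subclass mx.io.DataIter and expose provide_data, provide_label, reset, and next:

class SimpleIter(mx.io.DataIter):
    def __init__(self, X, Y, batch_size):
        super(SimpleIter, self).__init__()
        self.X, self.Y = X, Y
        self.batch_size = batch_size
        self.cursor = 0

    @property
    def provide_data(self):
        # name and shape of each data array in a batch
        return [('data', (self.batch_size,) + self.X.shape[1:])]

    @property
    def provide_label(self):
        return [('softmax_label', (self.batch_size,))]

    def reset(self):
        self.cursor = 0

    def next(self):
        # Serve the next mini-batch, or stop at the end of an epoch.
        if self.cursor + self.batch_size > self.X.shape[0]:
            raise StopIteration
        i, j = self.cursor, self.cursor + self.batch_size
        self.cursor = j
        return mx.io.DataBatch(data=[mx.nd.array(self.X[i:j])],
                               label=[mx.nd.array(self.Y[i:j])])

An instance such as SimpleIter(X_train, Y_train, batch_size) can then stand in for the NDArrayIter above.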

6: Training:

From what I've seen online, it seems you shouldn't train this way, because it leaves you with fewer things you can tweak and debug.

model = mx.model.FeedForward(
    ctx = mx.gpu(0),      # Run on GPU 0
    symbol = mlp,         # Use the network we just defined
    num_epoch = 10,       # Train for 10 epochs
    learning_rate = 0.1,  # Learning rate
    momentum = 0.9,       # Momentum for SGD with momentum
    wd = 0.00001)         # Weight decay for regularization
model.fit(
    X=train_iter,         # Training data set
    eval_data=test_iter,  # Testing data set. MXNet computes scores on the test set every epoch.
    batch_end_callback = mx.callback.Speedometer(batch_size, 200))  # Logging module to print out progress

The second approach:

Put the data into GPU memory first and initialize the parameters, then train (it seems this gives higher accuracy?).

data = mx.symbol.Variable('data')
fc1  = mx.symbol.FullyConnected(data, name='fc1', num_hidden=128)
act1 = mx.symbol.Activation(fc1, name='relu1', act_type="relu")
fc2  = mx.symbol.FullyConnected(act1, name = 'fc2', num_hidden = 64)
act2 = mx.symbol.Activation(fc2, name='relu2', act_type="relu")
fc3  = mx.symbol.FullyConnected(act2, name='fc3', num_hidden=10)
out  = mx.symbol.SoftmaxOutput(fc3, name = 'softmax')
# construct the module
mod = mx.mod.Module(out, context=mx.gpu())
mod.bind(data_shapes=train_iter.provide_data,
         label_shapes=train_iter.provide_label)
mod.init_params()
mod.fit(train_iter, eval_data=test_iter,
        optimizer_params={'learning_rate':0.01, 'momentum': 0.9},
        num_epoch=10)
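With the Module API, evaluation and prediction run directly on the bound module. A minimal sketch; 'acc' is the standard accuracy metric name, and I am assuming the usual Module.score/Module.predict signatures here:

print 'Accuracy:', mod.score(test_iter, 'acc')  # list of (metric name, value) pairs
test_iter.reset()
prob = mod.predict(test_iter)                   # NDArray of per-class probabilities
print 'First prediction:', prob[0].asnumpy().argmax()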

 

7: Predict with the trained model:

plt.imshow((X_test[0].reshape((28,28))*255).astype(np.uint8), cmap='Greys_r')
plt.show()
print 'Result:', model.predict(X_test[0:1])[0].argmax()

8: There is also a model evaluation function:

print 'Accuracy:', model.score(test_iter)*100, '%'
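As a cross-check of model.score (my own sketch, assuming X_test and Y_test from step 4), you can predict on the raw test arrays and compare the argmax to the labels:

probs = model.predict(X_test)  # numpy array, one row of class scores per sample
print 'Manual accuracy:', (probs.argmax(axis=1) == Y_test).mean() * 100, '%'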

9: Turn it into a function a web page can call:

# run hand drawing test
from IPython.display import HTML

def classify(img):
    img = img[len('data:image/png;base64,'):].decode('base64')
    img = cv2.imdecode(np.fromstring(img, np.uint8), -1)
    img = cv2.resize(img[:,:,3], (28,28))
    img = img.astype(np.float32).reshape((1, 784))/255.0
    return model.predict(img)[0].argmax()

html = """<style type="text/css">canvas { border: 1px solid black; }</style>
<div id="board">
<canvas id="myCanvas" width="100px" height="100px">
Sorry, your browser doesn't support canvas technology.
</canvas>
<p>
<button id="classify" onclick="classify()">Classify</button>
<button id="clear" onclick="myClear()">Clear</button>
Result: <input type="text" id="result_output" size="5" value="">
</p>
</div>"""
script = """<script type="text/JavaScript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js?ver=1.4.2"></script>
<script type="text/javascript">
function init() {
    var myCanvas = document.getElementById("myCanvas");
    var curColor = $('#selectColor option:selected').val();
    if(myCanvas){
        var isDown = false;
        var ctx = myCanvas.getContext("2d");
        var canvasX, canvasY;
        ctx.lineWidth = 5;
        $(myCanvas).mousedown(function(e){
            isDown = true;
            ctx.beginPath();
            var parentOffset = $(this).parent().offset();
            canvasX = e.pageX - parentOffset.left;
            canvasY = e.pageY - parentOffset.top;
            ctx.moveTo(canvasX, canvasY);
        }).mousemove(function(e){
            if(isDown != false) {
                var parentOffset = $(this).parent().offset();
                canvasX = e.pageX - parentOffset.left;
                canvasY = e.pageY - parentOffset.top;
                ctx.lineTo(canvasX, canvasY);
                ctx.strokeStyle = curColor;
                ctx.stroke();
            }
        }).mouseup(function(e){
            isDown = false;
            ctx.closePath();
        });
    }
    $('#selectColor').change(function () {
        curColor = $('#selectColor option:selected').val();
    });
}
init();
function handle_output(out) {
    document.getElementById("result_output").value = out.content.data["text/plain"];
}
function classify() {
    var kernel = IPython.notebook.kernel;
    var myCanvas = document.getElementById("myCanvas");
    data = myCanvas.toDataURL('image/png');
    document.getElementById("result_output").value = "";
    kernel.execute("classify('" + data + "')", { 'iopub' : {'output' : handle_output}}, {silent:false});
}
function myClear() {
    var myCanvas = document.getElementById("myCanvas");
    myCanvas.getContext("2d").clearRect(0, 0, myCanvas.width, myCanvas.height);
}
</script>"""
HTML(html+script)

10: Print weight statistics:

def norm_stat(d):
    """The statistics you want to see.
    We compute the L2 norm here but you can change it to anything you like."""
    return mx.nd.norm(d)/np.sqrt(d.size)

mon = mx.mon.Monitor(
    100,                 # Print every 100 batches
    norm_stat,           # The statistics function defined above
    pattern='.*weight',  # A regular expression. Only arrays with a name matching this pattern will be included.
    sort=True)           # Sort output by name

model = mx.model.FeedForward(ctx = mx.gpu(0), symbol = mlp, num_epoch = 1,
                             learning_rate = 0.1, momentum = 0.9, wd = 0.00001)
model.fit(X=train_iter, eval_data=test_iter,
          monitor=mon,  # Set the monitor here
          batch_end_callback = mx.callback.Speedometer(100, 100))

11: As mentioned earlier, the data iterator lets you write your own training loop (for example, for clustering).

Honestly, though, how does a hand-written loop end up on the GPU? The examples I've seen never seem to call it explicitly, which made me doubt it. Note, however, that the simple_bind call below passes ctx=mx.gpu(0), and that is what places the executor (and hence every forward/backward in the loop) on the GPU.

Here epoch is the number of passes over the data, and iter counts the data batches within a pass.

Generally speaking there is no need to write your own loop, so I don't particularly recommend it.

# ==================Binding=====================
# The symbol we created is only a graph description.
# To run it, we first need to allocate memory and create an executor by 'binding' it.
# In order to bind a symbol, we need at least two pieces of information: context and input shapes.
# Context specifies which device the executor runs on, e.g. cpu, GPU0, GPU1, etc.
# Input shapes define the executor's input array dimensions.
# MXNet then runs automatic shape inference to determine the dimensions of intermediate and output arrays.
# Data iterators define the shapes of their output with the provide_data and provide_label properties.
input_shapes = dict(train_iter.provide_data+train_iter.provide_label)
print 'input_shapes', input_shapes
# We use simple_bind to let MXNet allocate memory for us.
# You can also allocate memory yourself and use bind to pass it to MXNet.
exe = mlp.simple_bind(ctx=mx.gpu(0), **input_shapes)

# ===============Initialization=================
# First we get handles to the input arrays.
arg_arrays = dict(zip(mlp.list_arguments(), exe.arg_arrays))
data = arg_arrays[train_iter.provide_data[0][0]]
label = arg_arrays[train_iter.provide_label[0][0]]
# We initialize the weights with a uniform distribution on (-0.01, 0.01).
init = mx.init.Uniform(scale=0.01)
for name, arr in arg_arrays.items():
    if name not in input_shapes:
        init(name, arr)
# We also need to create an optimizer for updating weights.
opt = mx.optimizer.SGD(
    learning_rate=0.1,
    momentum=0.9,
    wd=0.00001,
    rescale_grad=1.0/train_iter.batch_size)
updater = mx.optimizer.get_updater(opt)
# Finally we need a metric to print out training progress.
metric = mx.metric.Accuracy()

# Training loop begins.
for epoch in range(10):
    train_iter.reset()
    metric.reset()
    t = 0
    for batch in train_iter:
        # Copy data to the executor inputs. Note the [:].
        data[:] = batch.data[0]
        label[:] = batch.label[0]
        # Forward
        exe.forward(is_train=True)
        # You can perform operations on exe.outputs here if you need to.
        # For example, you can stack a CRF on top of a neural network.
        # Backward
        exe.backward()
        # Update
        for i, pair in enumerate(zip(exe.arg_arrays, exe.grad_arrays)):
            weight, grad = pair
            updater(i, grad, weight)
        metric.update(batch.label, exe.outputs)
        t += 1
        if t % 100 == 0:
            print 'epoch:', epoch, 'iter:', t, 'metric:', metric.get()
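To check the trained executor on the test set, the same forward pass can be run with is_train=False. This is my own sketch reusing the variables defined above; it is not part of the original post:

metric.reset()
test_iter.reset()
for batch in test_iter:
    data[:] = batch.data[0]
    label[:] = batch.label[0]
    exe.forward(is_train=False)  # no gradients needed for evaluation
    metric.update(batch.label, exe.outputs)
print 'test metric:', metric.get()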

12: A new layer

The shapes of the input data and of the outputs must both be declared carefully.

# Define custom softmax operator
class NumpySoftmax(mx.operator.NumpyOp):
    def __init__(self):
        # Call the parent class constructor.
        # Because NumpySoftmax is a loss layer, it doesn't need gradient input from layers above.
        super(NumpySoftmax, self).__init__(need_top_grad=False)

    def list_arguments(self):
        # Define the inputs to NumpySoftmax.
        return ['data', 'label']

    def list_outputs(self):
        # Define the output.
        return ['output']

    def infer_shape(self, in_shape):
        # Calculate the dimensions of the output (and missing inputs) from (some) input shapes.
        data_shape = in_shape[0]         # shape of first argument 'data'
        label_shape = (in_shape[0][0],)  # 'label' should be one-dimensional with batch_size instances
        output_shape = in_shape[0]       # 'output' dimension is the same as the input
        return [data_shape, label_shape], [output_shape]

    def forward(self, in_data, out_data):
        x = in_data[0]   # 'data'
        y = out_data[0]  # 'output'
        # Compute softmax
        y[:] = np.exp(x - x.max(axis=1).reshape((x.shape[0], 1)))
        y /= y.sum(axis=1).reshape((x.shape[0], 1))

    def backward(self, out_grad, in_data, out_data, in_grad):
        l = in_data[1]   # 'label'
        l = l.reshape((l.size,)).astype(np.int)  # cast to int
        y = out_data[0]  # 'output'
        dx = in_grad[0]  # gradient for 'data'
        # Compute gradient
        dx[:] = y
        dx[np.arange(l.shape[0]), l] -= 1.0

numpy_softmax = NumpySoftmax()

data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data = data, name='fc1', num_hidden=128)
act1 = mx.symbol.Activation(data = fc1, name='relu1', act_type="relu")
fc2 = mx.symbol.FullyConnected(data = act1, name = 'fc2', num_hidden = 64)
act2 = mx.symbol.Activation(data = fc2, name='relu2', act_type="relu")
fc3 = mx.symbol.FullyConnected(data = act2, name='fc3', num_hidden=10)
# Use the new operator we just defined instead of the standard softmax operator.
mlp = numpy_softmax(data=fc3, name = 'softmax')

model = mx.model.FeedForward(ctx = mx.gpu(0), symbol = mlp, num_epoch = 2,
                             learning_rate = 0.1, momentum = 0.9, wd = 0.00001)
model.fit(X=train_iter, eval_data=test_iter,
          batch_end_callback = mx.callback.Speedometer(100, 100))
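As a quick standalone sanity check of the numpy math inside NumpySoftmax.forward (my own sketch, not in the original post), you can run the same formula on a random batch and confirm each row sums to 1:

x = np.random.randn(4, 10).astype(np.float32)
y = np.exp(x - x.max(axis=1).reshape((x.shape[0], 1)))  # subtract the row max for numerical stability
y /= y.sum(axis=1).reshape((x.shape[0], 1))
print 'row sums:', y.sum(axis=1)  # should all be ~1.0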

13: A new layer plus a new iteration loop:

I created this under the example/mytest folder.

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Thu Mar 30 15:35:02 2017

@author: root
"""
from __future__ import print_function
import sys
import os
# code to automatically download dataset
curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
sys.path = [os.path.join(curr_path, "../autoencoder")] + sys.path
import mxnet as mx
import numpy as np
import data
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
import model
from autoencoder import AutoEncoderModel
from solver import Solver, Monitor
import logging
import time
global YT
import scipy.io as sio  
import matplotlib.pyplot as plt 
# ==================start setting My-layer=====================
class NumpySoftmax(mx.operator.NumpyOp):
    def __init__(self):
        # Call the parent class constructor.
        # Because NumpySoftmax is a loss layer, it doesn't need gradient input from layers above.
        super(NumpySoftmax, self).__init__(need_top_grad=False)

    def list_arguments(self):
        # Define the inputs to NumpySoftmax.
        return ['data', 'label']

    def list_outputs(self):
        # Define the output.
        return ['output']

    def infer_shape(self, in_shape):
        # Calculate the dimensions of the output (and missing inputs) from (some) input shapes.
        data_shape = in_shape[0]         # shape of first argument 'data'
        label_shape = (in_shape[0][0],)  # 'label' should be one-dimensional with batch_size instances
        output_shape = in_shape[0]       # 'output' dimension is the same as the input
        return [data_shape, label_shape], [output_shape]

    def forward(self, in_data, out_data):
        alpha = 1.0
        z = in_data[0]   # 'data'
        q = out_data[0]  # 'output'
        kmeans = KMeans(n_clusters=10, random_state=170).fit(z)
        mu = kmeans.cluster_centers_
        # Compute the soft cluster assignments (Student's t kernel)
        mask = 1.0/(1.0+cdist(z, mu)**2/alpha)
        q[:] = mask**((alpha+1.0)/2.0)
        q[:] = (q.T/q.sum(axis=1)).T

    def backward(self, out_grad, in_data, out_data, in_grad):
        alpha = 1.0
        x = in_data[0]   # 'data'
        y = out_data[0]  # 'output'
        dx = in_grad[0]  # gradient for 'data'
        kmeans = KMeans(n_clusters=10, random_state=170).fit(x)
        mu = kmeans.cluster_centers_
        mask = 1.0/(1.0+cdist(x, mu)**2/alpha)
        p = mask**((alpha+1.0)/2.0)
        mask *= (alpha+1.0)/alpha*(p-y)
        dx[:] = (x.T*mask.sum(axis=1)).T - mask.dot(mu)
#======================end setting==========================
# ==================start of the process of data=====================
X, Y = data.get_mnist()
X_train = X[:60000]
X_test = X[60000:]
Y_train = Y[:60000]
Y_test = Y[60000:]
numpy_softmax = NumpySoftmax()
batch_size = 100
# the official code to create the iterators
train_iter = mx.io.NDArrayIter(X_train, Y_train, batch_size=batch_size)
test_iter = mx.io.NDArrayIter(X_test, Y_test, batch_size=batch_size)
input_shapes = dict(train_iter.provide_data+train_iter.provide_label)
# ==================end of the process=====================
# ==================start of setting the net=====================
data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data = data, name='fc1', num_hidden=128)
act1 = mx.symbol.Activation(data = fc1, name='relu1', act_type="relu")
fc2 = mx.symbol.FullyConnected(data = act1, name = 'fc2', num_hidden = 64)
act2 = mx.symbol.Activation(data = fc2, name='relu2', act_type="relu")
fc3 = mx.symbol.FullyConnected(data = act2, name='fc3', num_hidden=10)
mlp = numpy_softmax(data=fc3, name = 'softmax')
mx.viz.plot_network(mlp).view()  
# ==================end of setting the net=====================
exe = mlp.simple_bind(ctx=mx.gpu(0), **input_shapes)
# ===============Initialization=================
# First we get handle to input arrays
arg_arrays = dict(zip(mlp.list_arguments(), exe.arg_arrays))
data = arg_arrays[train_iter.provide_data[0][0]]
label = arg_arrays[train_iter.provide_label[0][0]]
# We initialize the weights with a uniform distribution on (-0.01, 0.01).
init = mx.init.Uniform(scale=0.01)
for name, arr in arg_arrays.items():
    if name not in input_shapes:
        init(name, arr)
# We also need to create an optimizer for updating weights.
opt = mx.optimizer.SGD(
    learning_rate=0.1,
    momentum=0.9,
    wd=0.00001,
    rescale_grad=1.0/train_iter.batch_size)
updater = mx.optimizer.get_updater(opt)
# Finally we need a metric to print out training progress.
metric = mx.metric.Accuracy()

# Training loop begins.
for epoch in range(10):
    train_iter.reset()
    metric.reset()
    t = 0
    for batch in train_iter:
        # Copy data to the executor inputs. Note the [:].
        data[:] = batch.data[0]
        label[:] = batch.label[0]
        # Forward
        exe.forward(is_train=True)
        # You can perform operations on exe.outputs here if you need to.
        # For example, you can stack a CRF on top of a neural network.
        # Backward
        exe.backward()
        # Update
        for i, pair in enumerate(zip(exe.arg_arrays, exe.grad_arrays)):
            weight, grad = pair
            updater(i, grad, weight)
        metric.update(batch.label, exe.outputs)
        t += 1
        if t % 100 == 0:
            print('epoch:', epoch, 'iter:', t, 'metric:', metric.get())

 

Reposted from: https://www.cnblogs.com/kangronghu/p/mxnet.html

