An error encountered while testing a data migration.
Contents
1. The Error
2. The Fix
3. Data Migration Test
3.1 Environment
3.2 Source Code and Testing
3.2.1 Source Code
3.2.2 Test Results (Too Slow)
3.2.3 Revised Source Code
3.2.4 Exception and Fix
1. The Error

```text
The driver has not received any packets from the server.
```
2. The Fix

Investigation showed that the MySQL server version did not match the mysql-connector-java version: both the local and remote databases are MySQL 5, while the mysql-connector-java dependency was 8:
```xml
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.13</version>
</dependency>
```
Fix: change the mysql-connector-java version in pom.xml to 5.1.49:
```xml
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.49</version>
</dependency>
```
Then change the driver class from

```java
Class.forName("com.mysql.cj.jdbc.Driver");
```

to

```java
Class.forName("com.mysql.jdbc.Driver");
```
The error was discovered while running the test described below.
3. Data Migration Test

3.1 Environment

- Maven project
- JDK 8
- MacBook Pro with 32 GB RAM
- Table: 72,382 rows, about 2.47 GB of storage
3.2 Source Code and Testing

What the program does: migrate data from someone else's MySQL database into a local one.

Why write a program for this at all? The simplest option is to copy and paste with Navicat. The pain point: Navicat pastes one row per INSERT, which is very slow, and on a table this size Navicat may simply hang (freeze). The table in question holds 72,382 rows, roughly 2.47 GB on disk.

There are plenty of ways to solve this; I wanted to write a small program for the migration, and running it is what produced the error in the title.
3.2.1 Source Code

pom.xml:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.example</groupId>
    <artifactId>jdbc2</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>
    <dependencies>
        <!-- MySQL JDBC driver -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.49</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.28</version>
            <scope>compile</scope>
        </dependency>
    </dependencies>
</project>
```
The migration test class:
```java
package xiao.xian;

import org.junit.Test;

import java.sql.*;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Migrates data from one MySQL 5.7 database to another. <br/>
 *
 * @author xiaoxian
 * @date 2023/9/14 16:44:25
 */
public class DataTransTest {

    private Connection conn1 = null;
    private Statement stmt1 = null;
    private Connection conn2 = null;
    private PreparedStatement stmt2 = null;

    public void init() throws ClassNotFoundException, SQLException {
        // 5.1.x driver class (com.mysql.cj.jdbc.Driver belongs to Connector/J 8)
        Class.forName("com.mysql.jdbc.Driver");
        // URL format: jdbc:mysql://host:port/database?property=value&...
        conn1 = DriverManager.getConnection(
                "jdbc:mysql://192.168.200.119:3306/dayloan_trans_dev?useSSL=true&useUnicode=true&characterEncoding=UTF8&serverTimezone=GMT",
                "root", "20191224123");
        stmt1 = conn1.createStatement();
        conn2 = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/dayloan_trans_dev?useSSL=true&useUnicode=true&characterEncoding=UTF8&serverTimezone=GMT",
                "root", "20191224123");
        stmt2 = conn2.prepareStatement(
                "insert into project_log_detail2(`id`, `project_record`, `borrower_record`, "
                        + "`guaranty_house_record`, `create_time`, `update_time`, `project_log_id`) "
                        + "values (?, ?, ?, ?, ?, ?, ?)");
    }

    @Test
    public void trans() {
        try {
            init();
            // The table holds 72,382 rows (~2.4 GB); time the migration of 10,000 rows
            for (int i = 0; i < 2; i++) {
                // Fetch 5,000 rows ...
                List<Map<String, Object>> list = loadData(i, 5000);
                // ... and commit them as one batch
                batchInsert(list);
            }
        } catch (Exception se) {
            // Handle JDBC errors
            se.printStackTrace();
        } finally {
            // Release resources
            try { if (stmt1 != null) { stmt1.close(); } } catch (SQLException ignored) { }
            try { if (conn1 != null) { conn1.close(); } } catch (Throwable ignored) { }
            try { if (stmt2 != null) { stmt2.close(); } } catch (SQLException ignored) { }
            try { if (conn2 != null) { conn2.close(); } } catch (Throwable ignored) { }
        }
        System.out.println("Finish.");
    }

    private List<Map<String, Object>> loadData(int pageNum, int size) throws SQLException {
        String sql = "SELECT * FROM project_log_detail limit " + (pageNum * size) + "," + size;
        ResultSet rs = stmt1.executeQuery(sql);
        List<Map<String, Object>> list = new ArrayList<>(size);
        // Copy the result set into a list of maps
        while (rs.next()) {
            Map<String, Object> map = new HashMap<>(16);
            map.put("id", rs.getString("id"));
            map.put("project_record", rs.getString("project_record"));
            map.put("borrower_record", rs.getString("borrower_record"));
            map.put("guaranty_house_record", rs.getString("guaranty_house_record"));
            // getTimestamp keeps the time-of-day that getDate would drop
            map.put("create_time", rs.getTimestamp("create_time"));
            map.put("update_time", rs.getTimestamp("update_time"));
            map.put("project_log_id", rs.getString("project_log_id"));
            list.add(map);
        }
        rs.close();
        return list;
    }

    private void batchInsert(List<Map<String, Object>> dataList) throws SQLException {
        for (Map<String, Object> map : dataList) {
            stmt2.setString(1, (String) map.get("id"));
            stmt2.setString(2, (String) map.get("project_record"));
            stmt2.setString(3, (String) map.get("borrower_record"));
            stmt2.setString(4, (String) map.get("guaranty_house_record"));
            stmt2.setTimestamp(5, (Timestamp) map.get("create_time"));
            stmt2.setTimestamp(6, (Timestamp) map.get("update_time"));
            stmt2.setString(7, (String) map.get("project_log_id"));
            stmt2.addBatch();
        }
        stmt2.executeBatch();
    }
}
```
3.2.2 Test Results (Too Slow)

Test conditions: 72,382 rows, about 2.47 GB.

Committing every 5,000 rows (2 commits, 10,000 rows in total) took 260,023 ms, i.e. over 4 minutes. Far too slow.

On disk the migrated data measured about 331 MB, so roughly 33.9 KB per row.
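A likely contributor to the slowness, for what it's worth: by default MySQL Connector/J sends each row of a PreparedStatement batch as its own INSERT statement over the wire. The driver's rewriteBatchedStatements=true option makes it rewrite the batch into multi-row INSERTs, which is essentially what the hand-built SQL in 3.2.3 below does. A sketch of the change, untested here, replacing the conn2 line in init():

```java
// Same connection as in init(), with batch rewriting enabled: the driver then
// collapses the PreparedStatement batch into multi-row INSERTs on the wire.
conn2 = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/dayloan_trans_dev"
                + "?useSSL=true&useUnicode=true&characterEncoding=UTF8"
                + "&serverTimezone=GMT&rewriteBatchedStatements=true",
        "root", "20191224123");
```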
3.2.3 Revised Source Code
```java
package xiao.xian;

import org.junit.Test;

import java.sql.*;

/**
 * Migrates data from one MySQL 5.7 database to another. <br/>
 *
 * @author xiaoxian
 * @date 2023/9/14 16:44:25
 */
public class DataTransTest2 {

    private Connection conn1 = null;
    private Statement stmt1 = null;
    private Connection conn2 = null;
    private Statement stmt2 = null;

    public void init() throws ClassNotFoundException, SQLException {
        Class.forName("com.mysql.jdbc.Driver");
        // URL format: jdbc:mysql://host:port/database?property=value&...
        conn1 = DriverManager.getConnection(
                "jdbc:mysql://192.168.200.119:3306/dayloan_trans_dev?useSSL=true&useUnicode=true&characterEncoding=UTF8&serverTimezone=GMT",
                "root", "20191224123");
        stmt1 = conn1.createStatement();
        conn2 = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/dayloan_trans_dev?useSSL=true&useUnicode=true&characterEncoding=UTF8&serverTimezone=GMT",
                "root", "20191224123");
        stmt2 = conn2.createStatement();
    }

    @Test
    public void trans() {
        System.out.println(System.currentTimeMillis());
        long start = 0;
        try {
            init();
            start = System.currentTimeMillis();
            // The table holds 72,382 rows (~2.4 GB); time the migration of 10,000 rows
            for (int i = 0; i < 5; i++) {
                // Fetch and commit 2,000 rows at a time
                loadAndInsert(i, 2000);
            }
        } catch (Exception se) {
            // Handle JDBC errors
            se.printStackTrace();
        } finally {
            // Release resources
            try { if (stmt1 != null) { stmt1.close(); } } catch (SQLException ignored) { }
            try { if (conn1 != null) { conn1.close(); } } catch (Throwable ignored) { }
            try { if (stmt2 != null) { stmt2.close(); } } catch (SQLException ignored) { }
            try { if (conn2 != null) { conn2.close(); } } catch (Throwable ignored) { }
        }
        long end = System.currentTimeMillis();
        System.out.println("Finish. " + (end - start));
    }

    private void loadAndInsert(int pageNum, int size) throws SQLException {
        String sql = "SELECT * FROM project_log_detail limit " + (pageNum * size) + "," + size;
        ResultSet rs = stmt1.executeQuery(sql);
        int i = 0;
        // Build one multi-row INSERT instead of one statement per row
        StringBuilder insertSQL = new StringBuilder("INSERT INTO project_log_detail2 VALUES");
        while (rs.next()) {
            if (i > 0) {
                insertSQL.append(",");
            }
            i++;
            insertSQL.append("('").append(rs.getString("id")).append("',");
            insertSQL.append("'").append(rs.getString("project_record")).append("',");
            insertSQL.append("'").append(rs.getString("borrower_record")).append("',");
            insertSQL.append("'").append(rs.getString("guaranty_house_record")).append("',");
            // getTimestamp keeps the time-of-day that getDate would drop
            insertSQL.append("'").append(rs.getTimestamp("create_time")).append("',");
            insertSQL.append("'").append(rs.getTimestamp("update_time")).append("',");
            insertSQL.append("'").append(rs.getString("project_log_id")).append("')");
        }
        rs.close();
        if (i == 0) {
            // Nothing fetched on this page; avoid sending an empty INSERT
            return;
        }
        stmt2.addBatch(insertSQL.toString());
        // Disable auto-commit so the whole page goes in one transaction
        conn2.setAutoCommit(false);
        stmt2.executeBatch();
        // Commit manually
        conn2.commit();
        // Restore auto-commit
        conn2.setAutoCommit(true);
    }
}
```
The table again: 72,382 rows, about 2.47 GB on disk.

Because the table is so large, only 10,000 rows were tested. Results are in the table below; the revised program is faster than the first one (best run ~131 s versus ~260 s for the same 10,000 rows).
| Rows per fetch/commit (10,000 rows total) | Time (ms) |
|---|---|
| 1000 | 144736 |
| 2000 | 143752 |
| 2500 | 131341 |
| 3333 | 140339 |
| 5000 | 145205 |
In other words: on this machine (10,000 rows at ~33.9 KB each), reading and writing about 2,500 rows at a time was fastest.
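A further tweak worth knowing about, though not measured in this test: `LIMIT offset, size` makes MySQL scan and discard `offset` rows on every page, so later pages read progressively slower. Seeking by primary key avoids that. A minimal sketch with a hypothetical loadPage helper, assuming `id` is indexed and its string ordering matches insertion order:

```java
private String lastId = "";

// Hypothetical replacement for the LIMIT-offset query: seek past the last
// migrated id instead of making MySQL scan and discard `offset` rows.
private ResultSet loadPage(int size) throws SQLException {
    String sql = "SELECT * FROM project_log_detail WHERE id > '" + lastId
            + "' ORDER BY id LIMIT " + size;
    // Caller must update lastId to the id of the final row it reads.
    return stmt1.executeQuery(sql);
}
```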
3.2.4 Exception and Fix

With the commit size set to 1,000 rows, the following error appeared:
```text
java.sql.BatchUpdateException: Packet for query is too large (21342389 > 20971520). You can change this value on the server by setting the max_allowed_packet' variable.
```
max_allowed_packet is the largest packet the MySQL server and client will accept in a single transfer; on this server it was 20 MB (20,971,520 bytes), and the 1,000-row multi-row INSERT came to 21,342,389 bytes, just over that limit.

Fix: raise the value.
One option is to set max_allowed_packet in the my.cnf configuration file (my.ini on Windows), which is permanent:

```ini
[mysqld]
max_allowed_packet = 512M
```
The other option is to change it with a command:

```sql
set global max_allowed_packet = 512 * 1024 * 1024;
```
After changing it on the command line, you must end the current session (close the connection to the MySQL server) and log in again before the new value is visible. A command-line change is only temporary: when the server restarts it reloads the settings from the configuration file, so the value reverts.
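To confirm the new limit is in effect, query it from a fresh connection (SET GLOBAL does not affect sessions opened earlier). A sketch; url, user, and password are placeholders:

```java
// Open a NEW connection and read back the server variable.
try (Connection conn = DriverManager.getConnection(url, user, password);
     Statement st = conn.createStatement();
     ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE 'max_allowed_packet'")) {
    if (rs.next()) {
        // Prints e.g. "max_allowed_packet = 536870912" after setting 512M
        System.out.println(rs.getString("Variable_name") + " = " + rs.getString("Value"));
    }
}
```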
Note: max_allowed_packet tops out at 1 GB (1073741824); setting anything higher still yields an effective value of 1 GB.
PS: this migration approach was only an experiment. For real-world database migration work I would not recommend moving data this way.