K-Nearest Neighbors Classification from Scratch

This post explores a step-by-step approach to building a K-Nearest Neighbors algorithm without the help of any third-party library. In practice, the algorithm lets us classify new data whenever we already have classified examples (in this case, colors), which serve as the starting point for finding neighbors.


For this post, we will use a specific dataset, which can be downloaded here. It contains 539 two-dimensional data points, each with a specific color classification. Our goal is to separate them into two groups (train and test) and to guess the colors of our test sample based on the algorithm's recommendation.

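As a quick sanity check once the file is loaded (the loading itself happens in the next section; the column names x, y, and Class are the ones used throughout the code below):

# After RGB has been loaded (see the next section):
dim(RGB)          # expect 539 rows and 3 columns: x, y, Class
table(RGB$Class)  # how many points of each color class we have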

Train and test sample generation

We will create two different sample sets:


  • Training Set: This will contain 75% of our working data, selected randomly. This set will be used to generate our model.


  • Test Set: The remaining 25% of our working data will be used to test the out-of-sample accuracy of our model. Once our predictions for this 25% are made, we will check the “percentage of correct classifications” by comparing predictions against real values.


# Load Data
library(readr)
RGB <- as.data.frame(read_csv("RGB.csv"))
RGB$x <- as.numeric(RGB$x)
RGB$y <- as.numeric(RGB$y)
print("Working data ready")

# Training Dataset (75% of the data, selected randomly)
smp_siz = floor(0.75*nrow(RGB))
train_ind = sample(seq_len(nrow(RGB)), size = smp_siz)
train = RGB[train_ind,]

# Testing Dataset (the remaining 25%)
test = RGB[-train_ind,]
OriginalTest <- test   # keep a copy with the original labels for scoring later
paste("Training and test sets done")

Training Data

We can observe that our training data is classified into 3 clusters based on color.


# We plot the colored training data points
library(ggplot2)
colsdot <- c("Blue" = "blue", "Red" = "darkred", "Green" = "darkgreen")
ggplot() +
  # Invisible tiles, used only to set the plot extent
  geom_tile(data=train, mapping=aes(x, y), alpha=0) +
  # Add points
  geom_point(data=train, mapping=aes(x, y, colour=Class), size=3) +
  scale_color_manual(values=colsdot) +
  # Add the labels to the plot
  xlab('X') + ylab('Y') + ggtitle('Train Data') +
  # Remove grey border from the tile
  scale_x_continuous(expand=c(0,.05)) + scale_y_continuous(expand=c(0,.05))
[Figure] Train data: we can observe 3 classes (Blue, Green, and Red).

Test Data

Even though we know the original color classification of our test data, we will try to create a model that can guess each point's color based solely on an educated guess. To do this, we remove the original colors and save them only for testing purposes; once our model makes its predictions, we can calculate our Model Accuracy by comparing the original values against our predictions.

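As a minimal sketch of this bookkeeping (the OriginalTest copy was already created in the loading step above; physically dropping the column is optional, since the test plot below simply omits the colour mapping):

# Keep the true labels aside for scoring, then hide them from the working set
# (optional: the plot below achieves the same effect by not mapping Class to a colour)
test$Class <- NULL     # drop the labels from the working copy
# ... make predictions ...
test <- OriginalTest   # restore the labelled copy when it is time to score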

# We plot the test data points without their colors
colsdot <- c("Blue" = "blue", "Red" = "darkred", "Green" = "darkgreen")
ggplot() +
  # Invisible tiles, used only to set the plot extent
  geom_tile(data=test, mapping=aes(x, y), alpha=0) +
  # Add points (no colour mapping: the classes are hidden on purpose)
  geom_point(data=test, mapping=aes(x, y), size=3) +
  scale_color_manual(values=colsdot) +
  # Add the labels to the plot
  xlab('X') + ylab('Y') + ggtitle('Test Data') +
  # Remove grey border from the tile
  scale_x_continuous(expand=c(0,.05)) + scale_y_continuous(expand=c(0,.05))
[Figure] Test data: we removed and purposely “forgot” the classification colors in order to create a model able to guess them.

K-Nearest Neighbors Algorithm

Below is a step-by-step example of an implementation of this algorithm. What we want to achieve: for each of the gray points above (our test values), whose actual color we pretend not to know, find the nearest neighbor — the nearest colored data point among our train values — and assign it that same color.


In particular, we need to:


  • Normalize data: even though it is not needed in this case, since all values are already on the same scale (decimals between 0 and 1), it is recommended to normalize in order to have a “standard distance metric”.


  • Define how we measure distance: we can define the distance between two points in this two-dimensional dataset as the Euclidean distance between them. We will calculate both L1 (sum of absolute differences) and L2 (Euclidean, the square root of the sum of squared differences) distances, though the final results will be calculated using L2, since it is more unforgiving than L1; a standalone sketch of both metrics appears after this list.


  • Calculate distances: we need to calculate the distance between each tested data point and every value within our train dataset. Normalization is critical here since, in the case of body measurements, a distance in weight (1 kg) and one in height (1 m) are not comparable: we can anticipate higher deviations in kilograms than in meters, leading to incorrect overall distances.


  • Sort distances: once we calculate the distance between every test point and every training point, we need to sort the training points in ascending order of distance, so that the nearest neighbors come first (this is what the code below does).


  • Select the top K nearest neighbors: we select the K nearest train data points and inspect which category (color) they belong to, in order to assign that category to our tested point. Since we might use multiple neighbors, we might end up with multiple candidate categories, in which case we should calculate a probability (the most frequent label wins).

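As a quick standalone sketch of the two distance metrics referenced above (the point coordinates here are made up for illustration):

# Two made-up 2D points on the same 0-1 scale as our dataset
p <- c(x = 0.10, y = 0.80)
q <- c(x = 0.40, y = 0.40)

# L1 distance: sum of absolute coordinate differences ("Manhattan" distance)
L1 <- sum(abs(p - q))        # 0.30 + 0.40 = 0.70

# L2 distance: Euclidean distance, the root of the summed squared differences
L2 <- sqrt(sum((p - q)^2))   # sqrt(0.09 + 0.16) = 0.50

print(c(L1 = L1, L2 = L2))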

# We define a function for prediction
KnnL2Prediction <- function(x, y, K) {

  # Train data
  Train <- train
  # This data frame will contain all X,Y values that we want to test
  Test <- data.frame(X = x, Y = y)

  # Data normalization (min-max, using the training ranges).
  # Note: the original denominators were written as (min - max), which only
  # flips the sign of every normalized value and leaves all distances
  # unchanged; (max - min) is the conventional form.
  Test$X  <- (Test$X  - min(Train$x)) / (max(Train$x) - min(Train$x))
  Test$Y  <- (Test$Y  - min(Train$y)) / (max(Train$y) - min(Train$y))
  Train$x <- (Train$x - min(Train$x)) / (max(Train$x) - min(Train$x))
  Train$y <- (Train$y - min(Train$y)) / (max(Train$y) - min(Train$y))

  # We calculate L1 and L2 distances between Test and Train values
  VarNum <- ncol(Train) - 1   # number of feature columns (Class is the last column)
  L1 <- 0
  L2 <- 0
  for (i in 1:VarNum) {
    L1 <- L1 + abs(Train[,i] - Test[,i])   # abs() is required for a true L1 distance
    L2 <- L2 + (Train[,i] - Test[,i])^2
  }

  # We will use the L2 (Euclidean) distance
  L2 <- sqrt(L2)

  # We add labels to the distances
  Result <- data.frame(Label = Train$Class, L1 = L1, L2 = L2)

  # We sort the data by distance, ascending (nearest first)
  ResultL1 <- Result[order(Result$L1),]
  ResultL2 <- Result[order(Result$L2),]

  # Return the most frequent label among the K nearest neighbors
  a <- prop.table(table(head(ResultL2$Label, K)))
  b <- as.data.frame(a)
  return(as.character(b$Var1[b$Freq == max(b$Freq)]))
}

Finding the correct K parameter using Cross-Validation

For this, we will use a method called “cross-validation”. This means that we will make predictions within the training data itself, iterating over many different values of K and over several different folds (permutations) of the data. Once we are done, we will average the results and obtain the best K for our “K-Nearest Neighbors” algorithm.


# We will use 5 folds
FoldSize = floor(0.2*nrow(train))

# Fold1
piece1 = sample(seq_len(nrow(train)), size = FoldSize)
Fold1 = train[piece1,]
rest = train[-piece1,]

# Fold2
piece2 = sample(seq_len(nrow(rest)), size = FoldSize)
Fold2 = rest[piece2,]
rest = rest[-piece2,]

# Fold3
piece3 = sample(seq_len(nrow(rest)), size = FoldSize)
Fold3 = rest[piece3,]
rest = rest[-piece3,]

# Fold4
piece4 = sample(seq_len(nrow(rest)), size = FoldSize)
Fold4 = rest[piece4,]
rest = rest[-piece4,]

# Fold5
Fold5 <- rest

# We make the splits: in each split, one fold becomes the reduced training
# set and the remaining four folds are held out
Split1_Test <- rbind(Fold1,Fold2,Fold3,Fold4)
Split1_Train <- Fold5
Split2_Test <- rbind(Fold1,Fold2,Fold3,Fold5)
Split2_Train <- Fold4
Split3_Test <- rbind(Fold1,Fold2,Fold4,Fold5)
Split3_Train <- Fold3
Split4_Test <- rbind(Fold1,Fold3,Fold4,Fold5)
Split4_Train <- Fold2
Split5_Test <- rbind(Fold2,Fold3,Fold4,Fold5)
Split5_Train <- Fold1

# We select the best K
OptimumK <- data.frame(K=NA, Accuracy=NA, Fold=NA)
results <- train      # accuracy is evaluated over the full training set
FullTrain <- train    # saved so we can restore `train` after the loop
for (i in 1:5) {
  # KnnL2Prediction reads the global `train`, so reassigning it here
  # changes the model's training data for this fold
  if(i == 1) {
    train <- Split1_Train
    test <- Split1_Test
  } else if(i == 2) {
    train <- Split2_Train
    test <- Split2_Test
  } else if(i == 3) {
    train <- Split3_Train
    test <- Split3_Test
  } else if(i == 4) {
    train <- Split4_Train
    test <- Split4_Test
  } else if(i == 5) {
    train <- Split5_Train
    test <- Split5_Test
  }
  for(j in 1:20) {
    results$Prediction <- mapply(KnnL2Prediction, results$x, results$y, j)
    # We calculate accuracy
    results$Match <- ifelse(results$Class == results$Prediction, 1, 0)
    Accuracy <- round(sum(results$Match)/nrow(results), 4)
    OptimumK <- rbind(OptimumK, data.frame(K=j, Accuracy=Accuracy, Fold=paste("Fold",i)))
  }
}
OptimumK <- OptimumK[-1,]
train <- FullTrain    # restore the full training set for the final model

MeanK <- aggregate(Accuracy ~ K, OptimumK, mean)
ggplot() +
  geom_point(data=OptimumK, mapping=aes(K, Accuracy, colour=Fold), size=3) +
  geom_line(data=MeanK, mapping=aes(K, Accuracy, colour="Mean accuracy"), linetype="twodash") +
  scale_x_continuous(breaks=seq(1, max(OptimumK$K), 1))
[Figure] 5 folds for 20 different values of K.

As seen in the plot above, our algorithm's prediction accuracy lies in the 88%–95% range across all folds and decreases from K=3 onwards. The most consistently high accuracy is observed at K=1 (K=3 is also a good alternative).

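If we prefer to pick the best K programmatically rather than by eye, a one-liner over the MeanK table computed above does it (a sketch; a tie between two values of K would need an explicit rule):

# K with the highest mean accuracy across the 5 folds
BestK <- MeanK$K[which.max(MeanK$Accuracy)]
print(paste("Best K:", BestK))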

Predicting Based on the Top 1 Nearest Neighbor

Model Accuracy

# Predictions over our test sample
test <- OriginalTest
K <- 1
test$Prediction <- mapply(KnnL2Prediction, test$x, test$y, K)
head(test, 10)

# We calculate accuracy
test$Match <- ifelse(test$Class == test$Prediction, 1, 0)
Accuracy <- round(sum(test$Match)/nrow(test), 4)
print(paste("Accuracy of ", Accuracy*100, "%", sep=""))
[Figure] First 10 predictions using K=1.

As seen in the results above, we can expect to “guess the correct class or color” about 93% of the time.

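To see where those misses land per class, a confusion table between the true and predicted labels helps (a quick sketch using the columns created above):

# Rows: true class; columns: predicted class
print(table(Actual = test$Class, Predicted = test$Prediction))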

Original Colors

Below we can observe the original colors or classes of our test sample.


ggplot() +
  geom_tile(data=test, mapping=aes(x, y), alpha=0) +
  geom_point(data=test, mapping=aes(x, y, colour=Class), size=3) +
  scale_color_manual(values=colsdot) +
  xlab('X') + ylab('Y') + ggtitle('Test Data') +
  scale_x_continuous(expand=c(0,.05)) + scale_y_continuous(expand=c(0,.05))
[Figure] The original color/class of our test samples.

Predicted Colors

Using our algorithm, we obtain the following colors for our initially colorless sample dataset.


ggplot() +
  geom_tile(data=test, mapping=aes(x, y), alpha=0) +
  geom_point(data=test, mapping=aes(x, y, colour=Prediction), size=3) +
  scale_color_manual(values=colsdot) +
  xlab('X') + ylab('Y') + ggtitle('Test Data') +
  scale_x_continuous(expand=c(0,.05)) + scale_y_continuous(expand=c(0,.05))
[Figure] Differences (incorrect classifications) are marked with red circles.

As seen in the plot above, even though our algorithm correctly classified most of the data points, it failed with some of them (marked in red).

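The red markers in the figure can be reproduced from the Match column computed earlier, for example by overlaying an open circle on every misclassified point (a sketch; the circle shape and size are arbitrary choices):

# Overlay an open red circle (shape 1) on every misclassified test point
ggplot() +
  geom_point(data=test, mapping=aes(x, y, colour=Prediction), size=3) +
  geom_point(data=test[test$Match == 0,], mapping=aes(x, y),
             shape=1, colour="red", size=6, stroke=1.5) +
  scale_color_manual(values=colsdot) +
  xlab('X') + ylab('Y') + ggtitle('Misclassified Points')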

Decision Limits

Finally, we can visualize our “decision limits” over our original test dataset. This provides an excellent visual approximation of how well our model classifies the data and of the limits of its classification space.


In simple words, we will simulate a grid of data points within the range of our original dataset which, when plotted, will fill most of the empty space with color. This helps express in detail how our model would classify this 2D space within its learned color classes. The more points we generate, the better our “resolution” will be, much like pixels on a TV: the code below uses a 40×40 grid (1,600 points) to keep the runtime manageable, while a 400×400 grid (160,000 points) would give a much finer picture.


# We calculate background colors over a grid covering the data range
# (length.out = 40 gives a 40x40 grid; raise it to 400 for the 160,000-point
# resolution described above, at a much higher runtime cost)
x_coord = seq(min(train[,1]) - 0.02, max(train[,1]) + 0.02, length.out = 40)
y_coord = seq(min(train[,2]) - 0.02, max(train[,2]) + 0.02, length.out = 40)
coord = expand.grid(x = x_coord, y = y_coord)
coord[['prob']] = mapply(KnnL2Prediction, coord$x, coord$y, K)

# We calculate predictions and plot the decision area
colsdot <- c("Blue" = "blue", "Red" = "darkred", "Green" = "darkgreen")
colsfill <- c("Blue" = "#aaaaff", "Red" = "#ffaaaa", "Green" = "#aaffaa")
ggplot() +
  geom_tile(data=coord, mapping=aes(x, y, fill=prob), alpha=0.8) +
  geom_point(data=test, mapping=aes(x, y, colour=Class), size=3) +
  scale_color_manual(values=colsdot) +
  scale_fill_manual(values=colsfill) +
  xlab('X') + ylab('Y') + ggtitle('Decision Limits') +
  scale_x_continuous(expand=c(0,0)) + scale_y_continuous(expand=c(0,0))
[Figure] Decision limits plotted over the test dataset.

As seen above, the colored regions represent the areas our algorithm would assign to each class, and it becomes visible why it failed to classify some of the points correctly.


Final Thoughts

K-Nearest Neighbors is a straightforward algorithm that seems to provide excellent results. Even though here we can classify items by eye, this model also works in higher-dimensional cases where we cannot merely observe the data with the naked eye. For it to work, we need a training dataset with existing classifications, which we later use to classify new data around it, meaning it is a supervised machine learning algorithm.


Sadly, this method runs into difficulties in the presence of intricate patterns that cannot be represented by a simple straight-line distance, as in the case of radial or nested clusters. It also has a performance problem: to classify each new data point, we need to compare it against every single point in our training dataset, which is resource- and time-intensive since it requires iterating over the complete set for every prediction.

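That per-prediction cost is easy to observe directly (a quick sketch; the absolute numbers depend on the machine). Each call below scans the entire training set, so the total cost grows linearly with both the number of queries and the size of the training data; optimized, compiled implementations such as class::knn in R are far faster than this R-level loop:

# Time the naive predictions: every call compares against all of `train`
system.time(
  mapply(KnnL2Prediction, test$x, test$y, 1)
)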

Translated from: https://towardsdatascience.com/k-nearest-neighbors-classification-from-scratch-6b31751bed9b

