Predicting Future Stock Market Trends with Python and Machine Learning

Note from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details.

With the recent volatility of the stock market due to the COVID-19 pandemic, I thought it was a good idea to try and utilize machine learning to predict the near-future trends of the stock market. I’m fairly new to machine learning, and this is my first Medium article so I thought this would be a good project to start off with and showcase.

This article tackles different topics concerning data science, namely: data collection and cleaning, feature engineering, and the creation of machine learning models to make predictions.

Author’s disclaimer: This project is not financial or investment advice. There is no guarantee that it will provide correct results most of the time. Therefore you should be very careful and not use this as a primary source of trading insight.

You can find all the code in a Jupyter notebook on my GitHub:

1. Imports and Data Collection

To begin, we include all of the libraries used for this project. I used the yfinance API to gather all of the historical stock market data. It’s pulled directly from the Yahoo Finance website, so it’s very reliable data.

import yfinance as yf
import datetime
import pandas as pd
import numpy as np
from finta import TA
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report
from sklearn import metrics

We then define some constants used in data retrieval and data processing. The list of indicator symbols helps us produce more features for our model.

"""
Defining some constants for data mining
"""NUM_DAYS = 10000     # The number of days of historical data to retrieve
INTERVAL = '1d'     # Sample rate of historical data
symbol = 'SPY'      # Symbol of the desired stock# List of symbols for technical indicators
INDICATORS = ['RSI', 'MACD', 'STOCH','ADL', 'ATR', 'MOM', 'MFI', 'ROC', 'OBV', 'CCI', 'EMV', 'VORTEX']

The finta documentation lists the actual names behind some of these indicator abbreviations.

Now we pull our historical data from yfinance. We don’t have many features to work with yet; the raw columns aren’t particularly useful unless we at least normalize them or derive more features from them.

"""
Next we pull the historical data using yfinance
Rename the column names because finta uses the lowercase names
"""start = (datetime.date.today() - datetime.timedelta( NUM_DAYS ) )
end = datetime.datetime.today()data = yf.download(symbol, start=start, end=end, interval=INTERVAL)
data.rename(columns={"Close": 'close', "High": 'high', "Low": 'low', 'Volume': 'volume', 'Open': 'open'}, inplace=True)
print(data.head())tmp = data.iloc[-60:]
tmp['close'].plot()
[Figure: data from the 'close' column]

2. Data Processing & Feature Engineering

We see that our data above is rough and contains lots of spikes for a time series. It isn’t very smooth, which can make it difficult for the model to extract trends. To reduce the appearance of this noise, we want to exponentially smooth our data before we compute the technical indicators.

"""
Next we clean our data and perform feature engineering to create new technical indicator features that our
model can learn from
"""def _exponential_smooth(data, alpha):"""Function that exponentially smooths dataset so values are less 'rigid':param alpha: weight factor to weight recent values more"""return data.ewm(alpha=alpha).mean()data = _exponential_smooth(data, 0.65)tmp1 = data.iloc[-60:]
tmp1['close'].plot()
[Figure: data from the 'close' column after smoothing]

We can see that the data is much smoother. Having many peaks and troughs can make it hard to approximate, or make it difficult to extract trends when computing the technical indicators; it can throw the model off.

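For intuition, here is what ewm(alpha=...).mean() computes. With adjust=False it is exactly the recursion s_t = alpha * x_t + (1 - alpha) * s_{t-1}; the default adjust=True used in the code above additionally corrects the bias in the first few values. A tiny sketch with made-up prices:

import pandas as pd

s = pd.Series([10.0, 11.0, 12.0, 11.5])
alpha = 0.65

# Hand-rolled recursion: s_t = alpha * x_t + (1 - alpha) * s_{t-1}
manual = [s.iloc[0]]
for x in s.iloc[1:]:
    manual.append(alpha * x + (1 - alpha) * manual[-1])

print(pd.Series(manual))
print(s.ewm(alpha=alpha, adjust=False).mean())  # matches the recursion exactly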

Now it’s time to compute our technical indicators. As stated above, I use the finta library in combination with Python’s built-in eval function to quickly compute all the indicators in the INDICATORS list. I also compute some EMAs at different average lengths, in addition to a normalized volume value.

I remove columns like ‘Open’, ‘High’, ‘Low’, and ‘Adj Close’ because we can get a good enough approximation from our EMAs in addition to the indicators. Volume is known to correlate with price fluctuations, which is why I normalized it.

def _get_indicator_data(data):
    """
    Function that uses the finta API to calculate technical indicators used as the features
    :return:
    """
    for indicator in INDICATORS:
        ind_data = eval('TA.' + indicator + '(data)')
        if not isinstance(ind_data, pd.DataFrame):
            ind_data = ind_data.to_frame()
        data = data.merge(ind_data, left_index=True, right_index=True)
    data.rename(columns={"14 period EMV.": '14 period EMV'}, inplace=True)

    # Also calculate moving averages for features
    # (note: pandas' ewm(50) sets com=50, its first positional argument, not span)
    data['ema50'] = data['close'] / data['close'].ewm(50).mean()
    data['ema21'] = data['close'] / data['close'].ewm(21).mean()
    data['ema14'] = data['close'] / data['close'].ewm(14).mean()
    data['ema5'] = data['close'] / data['close'].ewm(5).mean()

    # Instead of using the actual volume value (which changes over time),
    # we normalize it with a moving volume average
    data['normVol'] = data['volume'] / data['volume'].ewm(5).mean()

    # Remove columns that won't be used as features
    del (data['open'])
    del (data['high'])
    del (data['low'])
    del (data['volume'])
    del (data['Adj Close'])

    return data

data = _get_indicator_data(data)
print(data.columns)

Index(['close', 'RSI', 'MACD', 'SIGNAL', '14 period STOCH %K', 'MFV',
       '14 period ATR', 'MOM', '14 period MFI', 'ROC', 'OBV_x', 'OBV_y',
       '20 period CCI', '14 period EMV', 'VIm', 'VIp', 'ema50', 'ema21',
       'ema14', 'ema5', 'normVol'],
      dtype='object')
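
As an aside, the same dynamic lookup can be done without eval. A minimal sketch using getattr, assuming every entry in INDICATORS names a method on finta’s TA class:

from finta import TA

def _get_indicator(name, df):
    # getattr fetches e.g. TA.RSI and calls it, avoiding eval on strings
    return getattr(TA, name)(df)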

Right before we gather our predictions, I decided to keep a small bit of data to predict future values with. This line captures the 5 rows corresponding to the 5 trading days of the week of July 27th.

live_pred_data = data.iloc[-16:-11]

Now comes one of the most important parts of this project: computing the truth values. Without these, we wouldn’t even be able to train a machine learning model to make predictions.

How do we obtain the truth values? It’s quite intuitive. If we want to know whether a stock will increase or decrease (to hopefully make a million dollars!), we just need to look into the future and observe the price to determine whether we should buy or sell right now. With all this historical data, that’s exactly what we can do.

Going back to the table where we initially pulled our data, if we want to know the buy (1) or sell (0) decision on the day of 1993–03–29 (where the closing price was 11.4375), we just need to look X days ahead to see if the price is higher or lower than that on 1993–03–29. So if we look 1 day ahead, we see that the price increased to 11.5. So the truth value on 1993–03–29 would be a buy (1).

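To make the look-ahead concrete, here is the same shift-and-compare logic on a toy series (hypothetical prices; window=1 mirrors the 1993–03–29 example above):

import pandas as pd

close = pd.Series([11.4375, 11.5, 11.25, 11.625])
window = 1

# 1 where the price 'window' rows ahead is >= today's price, else 0
pred = (close.shift(-window) >= close).astype(int)
print(pred)  # the final 'window' rows have no future price; the real code below drops them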

Since this is also the last step of data processing, we remove all of the NaN values that our indicators and prediction generated, and we drop the ‘close’ column.

def _produce_prediction(data, window):
    """
    Function that produces the 'truth' values
    At a given row, it looks 'window' rows ahead to see if the price increased (1) or decreased (0)
    :param window: number of days, or rows, to look ahead to see what the price did
    """
    prediction = (data.shift(-window)['close'] >= data['close'])
    prediction = prediction.iloc[:-window]
    data['pred'] = prediction.astype(int)
    return data

data = _produce_prediction(data, window=15)
del (data['close'])
data = data.dropna() # Some indicators produce NaN values for the first few rows, we just remove them here
data.tail()

Because we used pandas’ shift() function, we lose the last 15 rows of the dataset (which is why I captured the week of July 27th before this step).

3. Model Creation

Right before we train our model, we must split the data into a train set and a test set. Obvious? That’s because it is. We use roughly an 80:20 split, which is pretty good considering the amount of data we have.

def _split_data(data):
    """
    Function to partition the data into the train and test set
    :return:
    """
    y = data['pred']
    features = [x for x in data.columns if x not in ['pred']]
    X = data[features]
    X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=4 * len(X) // 5)
    return X_train, X_test, y_train, y_test

X_train, X_test, y_train, y_test = _split_data(data)
print('X Train : ' + str(len(X_train)))
print('X Test  : ' + str(len(X_test)))
print('y Train : ' + str(len(y_train)))
print('y Test  : ' + str(len(y_test)))
X Train : 5493
X Test : 1374
y Train : 5493
y Test : 1374
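
One caveat worth noting: train_test_split shuffles rows by default, so test rows end up interleaved in time with the training rows. If you would rather hold out the most recent data as a strictly chronological test set, the same call accepts shuffle=False (a sketch reusing X and y from _split_data):

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=4 * len(X) // 5, shuffle=False)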

Next we’re going to use multiple classifiers to create an ensemble model. The goal here is to combine the predictions of several models to try to improve predictive accuracy. For each sub-model, we’re also going to use scikit-learn’s GridSearchCV to tune it for the best possible results.

First we create the random forest model.

def _train_random_forest(X_train, y_train, X_test, y_test):
    """
    Function that uses random forest classifier to train the model
    :return:
    """
    # Create a new random forest classifier
    rf = RandomForestClassifier()

    # Dictionary of all values we want to test for n_estimators
    params_rf = {'n_estimators': [110, 130, 140, 150, 160, 180, 200]}

    # Use gridsearch to test all values for n_estimators
    rf_gs = GridSearchCV(rf, params_rf, cv=5)

    # Fit model to training data
    rf_gs.fit(X_train, y_train)

    # Save best model
    rf_best = rf_gs.best_estimator_

    # Check best n_estimators value
    print(rf_gs.best_params_)

    prediction = rf_best.predict(X_test)
    print(classification_report(y_test, prediction))
    print(confusion_matrix(y_test, prediction))

    return rf_best

rf_model = _train_random_forest(X_train, y_train, X_test, y_test)
{'n_estimators': 160}
              precision    recall  f1-score   support

         0.0       0.88      0.72      0.79       489
         1.0       0.86      0.95      0.90       885

    accuracy                           0.87      1374
   macro avg       0.87      0.83      0.85      1374
weighted avg       0.87      0.87      0.86      1374

[[353 136]
 [ 47 838]]

Then the KNN model.

def _train_KNN(X_train, y_train, X_test, y_test):
    knn = KNeighborsClassifier()

    # Create a dictionary of all values we want to test for n_neighbors
    params_knn = {'n_neighbors': np.arange(1, 25)}

    # Use gridsearch to test all values for n_neighbors
    knn_gs = GridSearchCV(knn, params_knn, cv=5)

    # Fit model to training data
    knn_gs.fit(X_train, y_train)

    # Save best model
    knn_best = knn_gs.best_estimator_

    # Check best n_neighbors value
    print(knn_gs.best_params_)

    prediction = knn_best.predict(X_test)
    print(classification_report(y_test, prediction))
    print(confusion_matrix(y_test, prediction))

    return knn_best

knn_model = _train_KNN(X_train, y_train, X_test, y_test)
{'n_neighbors': 1}
              precision    recall  f1-score   support

         0.0       0.81      0.84      0.82       489
         1.0       0.91      0.89      0.90       885

    accuracy                           0.87      1374
   macro avg       0.86      0.86      0.86      1374
weighted avg       0.87      0.87      0.87      1374

[[411  78]
 [ 99 786]]

Finally, the Gradient Boosted Tree.

def _train_GBT(X_train, y_train, X_test, y_test):
    clf = GradientBoostingClassifier()

    # Dictionary of parameters to optimize
    params_gbt = {'n_estimators': [150, 160, 170, 180], 'learning_rate': [0.2, 0.1, 0.09]}

    # Use gridsearch to test all combinations of the parameters
    grid_search = GridSearchCV(clf, params_gbt, cv=5)

    # Fit model to training data
    grid_search.fit(X_train, y_train)

    # Save best model
    gbt_best = grid_search.best_estimator_
    print(grid_search.best_params_)

    prediction = gbt_best.predict(X_test)
    print(classification_report(y_test, prediction))
    print(confusion_matrix(y_test, prediction))

    return gbt_best

gbt_model = _train_GBT(X_train, y_train, X_test, y_test)
{'learning_rate': 0.2, 'n_estimators': 180}
              precision    recall  f1-score   support

         0.0       0.81      0.70      0.75       489
         1.0       0.85      0.91      0.88       885

    accuracy                           0.84      1374
   macro avg       0.83      0.81      0.81      1374
weighted avg       0.83      0.84      0.83      1374

[[342 147]
 [ 79 806]]

And now, finally, we create the voting classifier.

def _ensemble_model(rf_model, knn_model, gbt_model, X_train, y_train, X_test, y_test):
    # Create a list of our models
    estimators = [('knn', knn_model), ('rf', rf_model), ('gbt', gbt_model)]

    # Create our voting classifier, inputting our models
    ensemble = VotingClassifier(estimators, voting='hard')

    # Fit model to training data
    ensemble.fit(X_train, y_train)

    # Test our model on the test data
    print(ensemble.score(X_test, y_test))

    prediction = ensemble.predict(X_test)
    print(classification_report(y_test, prediction))
    print(confusion_matrix(y_test, prediction))

    return ensemble

ensemble_model = _ensemble_model(rf_model, knn_model, gbt_model, X_train, y_train, X_test, y_test)
0.8748180494905385
              precision    recall  f1-score   support

         0.0       0.89      0.75      0.82       513
         1.0       0.87      0.95      0.90       861

    accuracy                           0.87      1374
   macro avg       0.88      0.85      0.86      1374
weighted avg       0.88      0.87      0.87      1374

[[387 126]
 [ 46 815]]

We can see that we gain slightly more accuracy by using ensemble modelling (given the confusion matrix results).

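To quantify that comparison, we can score every fitted model on the same test set (a small sketch; it assumes the four fitted models from above are still in scope):

from sklearn.metrics import accuracy_score

for name, model in [('rf', rf_model), ('knn', knn_model), ('gbt', gbt_model), ('ensemble', ensemble_model)]:
    # Compare each model's predicted labels against the true test labels
    print(name, accuracy_score(y_test, model.predict(X_test)))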

4. Verification of Results

For the next step, we’re going to predict how the S&P 500 will behave with our predictive model. I’m writing this article on the weekend of August 17th, so to see if this model can produce accurate results, I’m going to use the closing data from this week as the ‘truth’ values for the prediction. Since this model is tuned to a 15-day window, we need to feed in input data from the days in the week of July 27th:

July 27th -> August 17th
July 28th -> August 18th
July 29th -> August 19th
July 30th -> August 20th
July 31st -> August 21st
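
This 15-trading-day offset can be sanity-checked with pandas’ business-day arithmetic (a sketch; BDay ignores market holidays, so it is only approximate):

import pandas as pd
from pandas.tseries.offsets import BDay

for d in pd.date_range('2020-07-27', '2020-07-31', freq='B'):
    print(d.date(), '->', (d + BDay(15)).date())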

We saved the week we’re going to use in live_pred_data.

live_pred_data.head()
[Figure: live_pred_data.head() output]

Here are the five main days we are going to generate a prediction for. It looks like the model predicts that the price will increase for each day.

Let’s validate our prediction against the actual results.

del(live_pred_data['close'])
prediction = ensemble_model.predict(live_pred_data)
print(prediction)

[1. 1. 1. 1. 1.]

Results

July 27th : $322.78 -> August 17th : $337.91
July 28th : $321.74 -> August 18th : $338.64
July 29th : $323.93 -> August 19th : $337.23
July 30th : $323.95 -> August 20th : $338.28
July 31st : $325.62 -> August 21st : $339.48

As we can see from the actual results, the model was correct in all of its predictions. However, many factors go into determining the stock price, so saying that the model will produce similar results every time would be naive. That said, during relatively normal periods of time (without major panic causing volatility in the stock market), the model should be able to produce good results.

5. Summary

To summarize what we’ve done in this project:

  1. We collected data to be used in analysis and feature creation.
  2. We used pandas to compute many model features and produce clean data for machine learning, and created the prediction (truth) values with pandas as well.
  3. We trained several machine learning models, then combined them using ensemble learning to produce higher prediction accuracy.
  4. We verified that our predictions were accurate against real-world data.

I’ve learned a lot about data science and machine learning through this project, and I hope you did too. Since this is my first article, I’d love any form of feedback to help improve my skills as a programmer and data scientist.

Thanks for reading :)

Translated from: https://towardsdatascience.com/predicting-future-stock-market-trends-with-python-machine-learning-2bf3f1633b3c
