The Effectiveness of Feed-Forward Neural Networks in Trend-Based Trading (1)

This is a preliminary showcase of collaborative research by Seouk Jun Kim (Daniel) and Sunmin Lee. You can find our contact information at the bottom of the article.

Note from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details.

With interest in artificial intelligence on the rise, numerous people have attempted to apply machine learning techniques to predicting the market, especially in the field of high-frequency trading using stock price time-series data.

Just on Medium alone there are tens of posts on stock price prediction using RNNs, LSTMs, GRUs, feed-forward neural nets, etc. Predicting the market, however, is no trivial task: most published attempts seem to fall short of the performance required for a strategy based on the model's predictions to succeed. In most posts the situation is either a perfect prediction (an indicator that the writer has definitely done something wrong) or a dismal result that discourages any further research.

Aldridge and Avellaneda (2019), however, show that there is hope in using neural networks to predict returns. While the paper definitely demonstrates the limitations of a simple neural net, it also shows that through careful selection of the training period and input data, a simple strategy based on neural net predictions could outperform the buy-and-hold strategy.

Our research hopes to reproduce the results of Aldridge and Avellaneda (2019) while also taking more time to explore the financial theoretical background. In the end we will use the validated results to introduce more robust prediction models and strategies.

In this specific article, we will discuss the early failures we had. More specifically, we will train a feed-forward neural network with a fixed-window time-series input, explain the hypothesis behind why such training should work, and then explore why that hypothesis fails.

Rationale Behind Stock Price Trend-Based Analysis

Since there are a million Medium posts explaining the effectiveness and limitations of neural nets, we will cut to the chase and discuss why stock price trend-based analysis might work. That is, past stock price data plus a few other data points might hold enough information to provide meaningful predictions.

Pedersen (2015) describes the life cycle of a trend as follows:

  1. Start of the Trend: Underreaction to Information
  2. Trend Continuation: Delayed Overreaction
  3. End of Trend (Reversion to Fundamental Value)

Pedersen (2015) mentions trend-following investing as a strategy for managed futures, but maybe some of the ideas could be applied to our case here.

Although the extreme form of the efficient market hypothesis states that the market instantaneously incorporates every piece of information, both public and undisclosed, research such as Ho et al. (2019) has shown that people underreact even to publicly available data. Likewise, overreaction because of the "herding" effect and events like margin calls during a market crisis could cause the market price to move away from the asset's fundamental value.

It is true that many of the inefficiencies of the market have already been exploited, thereby strengthening market efficiency. This is where the reason for using a neural net comes in. As underlined by Aldridge and Avellaneda (2019) and QRAFT (2020), the ability to detect nonlinear patterns in data could be essential in detecting financial patterns that have not been detected before. For example, Aldridge and Avellaneda (2019) cautiously attribute the superb performance of a neural net prediction operating on both SPY and the target asset's stock price data to the ability of neural nets to detect nonlinear patterns, overcoming the limitations of the famous CAPM's dependence on linearity.

Theoretical Assumption

The hypothesis we held at the beginning of the research was that a neural network trained with a large amount of stock price data would be able to detect the related stock's deviation from its fundamental value. Although stock price data contains a great deal of noise and bias, we assumed that a large amount of training data would allow the neural net to train properly.

Goal and Acknowledgment of the Difficulty

The goal of the research was to beat the buy-and-hold strategy. As we were focused on uncovering the ability or inability of the neural net to make accurate future price predictions, we only emulated the trading strategy to work with one given asset. In hindsight, this was an incredibly ambitious project, as beating the buy-and-hold strategy for one given asset while using only minimal financial data would promise phenomenal returns if expanded to a diverse portfolio. However, Aldridge and Avellaneda (2019) already show that using stock data from only two assets (the target asset and the SPY) is sufficient to beat the buy-and-hold strategy with a relatively simple feed-forward neural net. So perhaps while this method might not work for any given asset, it could work for certain ones.

Pitfalls To Avoid

So before we go on…

There are common pitfalls observable in too many Medium posts on time-series analysis for stock price prediction.

  1. Standardizing/normalizing the entire dataset
  2. Training on the whole dataset and testing within the trained dataset
  3. Selecting the input/output incorrectly, thereby producing a seemingly perfect prediction output that is in fact just the input with a time-lag (in most cases, training the model to predict stock prices, not returns)

While our research is not perfect, you can be assured that the pitfalls mentioned above are avoided.

Trading Strategy for Testing

As mentioned above, our test trading strategy does not create a portfolio; we use one asset's return prediction to make an all-out buy/sell decision.

  1. If the return prediction for the next trading period is positive, buy as much of that asset as possible using the current cash balance.
  2. If the return prediction for the next trading period is negative, liquidate everything.

In theory, a well-trained neural net will be able to beat the buy-and-hold position with this strategy. The key factors that would enable excellent returns with this strategy are:

  1. Keep up with the increasing price of the stock during a bull market.
  2. Avoid the market crash, and ride the wave when the market picks up again.

The reason for selecting such a simple trading strategy is, firstly, that this makes the performance of the strategy more directly attributable to the prediction model; secondly, that we lack more sophisticated financial knowledge; and thirdly, that as a soldier with a monthly income of 400 dollars and a university student, we cannot afford the computing power required for backtesting complex portfolio management.

Also note that we will be assuming the risk-free rate is zero, and that we have not incorporated transaction costs in our backtests.
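
To make the backtest mechanics concrete, here is a minimal sketch of the simulation loop under exactly these assumptions. The function name, the initial cash amount, and the way predictions and realized returns are passed in are illustrative choices of ours, not something specified in the original article.

```python
import numpy as np

def backtest_all_in(predicted_returns, realized_returns, initial_cash=10_000.0):
    """All-in/all-out strategy: stay fully invested whenever the predicted return
    for the next period is positive, otherwise hold cash. Assumes a zero risk-free
    rate, no transaction costs, and no leverage."""
    equity = initial_cash
    curve = []
    for predicted, realized in zip(predicted_returns, realized_returns):
        if predicted > 0:
            equity *= 1.0 + realized  # buy as much as possible, earn the period's return
        # otherwise liquidate everything; cash earns nothing at a zero risk-free rate
        curve.append(equity)
    return np.array(curve)
```

The buy-and-hold benchmark for the same period is then simply initial_cash * np.cumprod(1 + np.asarray(realized_returns)).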

Training the Fixed-Window Prediction Model

Those familiar with the subject might scoff as they read the specifications of the prediction model discussed here, as it is quite obvious that such a model should not work. However, the failure of this model does provide key insights and helps eliminate false assumptions, so it should be worth exploring.

Assumptions

  1. Training a feed-forward neural net on a fixed 10 to 20 years of asset return data should enable the model to detect notable trends.
  2. The detected trends could be applied to predict future asset returns.

Data Pre-processing

I used yfinance to collect Yahoo Finance historical data. For ease of understanding I will introduce three terms that I used in the process.

  1. Training period: the span of time from which training data is collected
  2. Training size: the length of the window from which the input data (train_x) to the neural net is assembled
  3. Prediction period: the period of time we want to look forward into

For the example here we will use a training period of 17 years (trading days from June 30, 1993 to June 30, 2010), a training size of 100 days, and a prediction period of 21 days (i.e., making monthly return predictions). The chosen asset will be the SPY ETF. We will only consider monthly returns as training data.
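
Assuming yfinance's standard download interface, collecting the raw data looks roughly like the sketch below; the ticker and date range follow the setup above, and the variable names are ours.

```python
import yfinance as yf

# Daily SPY history covering both the training period (1993-06-30 to 2010-06-30)
# and the later test period (2010-07-01 to 2020-07-01); the end date is exclusive.
spy = yf.download("SPY", start="1993-06-30", end="2020-07-02")
close = spy["Close"]  # raw closing price; we deliberately do not use "Adj Close"
```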

Using a pandas DataFrame makes the pre-processing quite trivial. (Note that the description below treats the input and output as numpy arrays, not as tensors.)

  1. Shift the closing price by (-1) * prediction_period.
  2. Calculate the forward return for each trading day using the shifted closing price.
  3. Eliminate trading days with null values, then extract a list of lists of forward returns, each inner list with the size of training_size. This is our train_x.
  4. For each list of returns (each list within train_x), the target return is the monthly forward return of the trading day that is one prediction_period away from the latest date included in the list.
  5. Then use sklearn's MinMaxScaler to fit_transform the input data.
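
Continuing from the close series downloaded above, the five steps can be sketched as follows. The helper name make_windows, the constants, and the exact indexing of the target return reflect our reading of step 4, so treat this as an illustration rather than the exact code we ran.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

PREDICTION_PERIOD = 21  # monthly horizon, in trading days
TRAINING_SIZE = 100     # length of each input window

def make_windows(close: pd.Series):
    """Steps 1-4: forward returns, sliding input windows, and target returns."""
    # Steps 1-2: shift the close by -PREDICTION_PERIOD and compute the forward return
    forward_return = close.shift(-PREDICTION_PERIOD) / close - 1.0
    returns = forward_return.dropna().to_numpy()  # step 3: drop days with null values
    xs, ys = [], []
    for i in range(len(returns) - TRAINING_SIZE - PREDICTION_PERIOD + 1):
        xs.append(returns[i : i + TRAINING_SIZE])
        # Step 4: target is the forward return one prediction period past the window's last day
        ys.append(returns[i + TRAINING_SIZE - 1 + PREDICTION_PERIOD])
    return np.array(xs), np.array(ys)

train_x, train_y = make_windows(close.loc["1993-06-30":"2010-06-30"])

# Step 5: scale the training inputs
scaler = MinMaxScaler()
train_x = scaler.fit_transform(train_x)
```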

As you might notice, steps 3 and 4 could be modified so that, instead of forward returns, historical returns [(close_t / close_(t - prediction_period)) - 1] are used to the same effect.

You might also be wondering why we chose to use the daily closing price and not the adjusted closing price. We assumed that the closing price would be 'safer' to use than the adjusted closing price, considering the possible look-ahead bias. There are other opinions asserting that the adjusted closing price is more appropriate when backtesting on returns (not prices!); this is something we are continuing to look into. Through feature engineering, we hope to minimize survivorship bias and look-ahead bias altogether in the near future.

Then what of the test set? We followed the same process detailed above, except that the data was gathered from about 10 years of trading days (July 1, 2010 to July 1, 2020), and that when scaling the test input, we used scaler.transform to avoid data leakage.
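
Continuing the sketch above, the test windows are built the same way, but the scaler is only re-applied, never re-fitted, which is what prevents the leakage just mentioned.

```python
# Build test windows from the later period, then apply the scaler already fitted on
# the training inputs; calling fit_transform here would leak test-period statistics.
test_x, test_y = make_windows(close.loc["2010-07-01":"2020-07-01"])
test_x = scaler.transform(test_x)
```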

Model Selection

As is obvious from the title of this article, we experimented with a feed-forward neural network.

To optimize the neural net and gain insight we tried numerous hyperparameters, activation functions, numbers of layers, etc., but all the models share one similarity.

In all models we use the tanh activation function for the final output layer. As tanh returns a value between -1 and 1, it is a better choice than sigmoid as the output layer's activation function, and since we rule out leverage in our strategy, there should not be a case in which the actual return falls outside the range of the tanh function.
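
The article does not state the framework, the hidden-layer width, or the hidden activation, so the Keras sketch below, with an arbitrary 64-unit ReLU hidden layer and arbitrary training settings, is only one plausible instantiation of a one-hidden-layer model with a tanh output and mean squared error loss.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(train_x.shape[1],)),
    tf.keras.layers.Dense(1, activation="tanh"),  # monthly returns stay within (-1, 1) without leverage
])
model.compile(optimizer="adam", loss="mse")
model.fit(train_x, train_y, epochs=100, batch_size=32, verbose=0)

predicted_returns = model.predict(test_x).flatten()
```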

Backtesting

One hidden layer, mean squared error loss function

Aldridge and Avellaneda (2019) show that, quite surprisingly, a simple neural net with a single hidden layer has enough computational power to emulate a moving-average crossover strategy. It is less clear, however, whether Aldridge and Avellaneda (2019) maintain that same model structure throughout the paper, and it is only at the end of the paper that we see a successful instance of a prediction model beating the buy-and-hold strategy.

We therefore first tested performance with a neural net with one hidden layer. Quite surprisingly, the first backtest we ran with this model structure showed outstanding results.

Fig 1. blue: SPY, red: simulated strategy (x-axis: months, y-axis: dollars)

A stunning 0.903 Sharpe ratio (zero risk-free rate) compared to the 0.750 Sharpe (zero risk-free rate) of the buy-and-hold position in the SPY ETF. Impressive, right? What could be wrong?
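
For reference, a Sharpe ratio under a zero risk-free rate can be computed as below; the article does not state its annualization convention, so annualizing monthly strategy returns by the square root of 12 is our assumption.

```python
import numpy as np

def sharpe_ratio(period_returns, periods_per_year=12):
    """Annualized Sharpe ratio assuming a zero risk-free rate."""
    r = np.asarray(period_returns, dtype=float)
    return np.sqrt(periods_per_year) * r.mean() / r.std()
```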

Sadly, there are many problems.

The first problem is that, as excellent as the graph above seems, such results are not readily reproducible. Running the same code several times shows that the graph above is the result of pure luck. Most iterations of the code show poor Sharpe, returns, etc., and at that point, tinkering with the model to achieve a result similar to the one above effectively turns the test set into a validation set, putting the credibility of any result in question.

The second problem, and the major one, is that the performance of the model above mainly comes from predicting a positive return and leeching off the buy-and-hold position of the SPY. To put it in numbers, the model trained for the graph above predicts a positive return 78% of the time, and in the cases where it predicts a negative return, it is wrong 62% of the time.

In other words, the model gets lucky: in the 38% of cases where its negative return predictions are right, it nails them at critical points where the asset's price fluctuation is high, and otherwise it resorts to hugging the asset.

The last problem is that the training loss stops decreasing at a fairly high value. This perhaps should have been the first indicator that one hidden layer is not going to do the magic; the neural net lacks the computational power to train properly, otherwise the training loss would keep decreasing.

Multiple hidden layers, mean squared error loss function

There’s really nothing to be said here. The model prediction is awful.

The model with multiple hidden layers shows more promise than the one-hidden-layer model, in that predicted returns are not so lopsided toward either the positive or negative side. Which in turn just translates to the model being bad at making useful predictions. Yikes.

Fig 2. blue: SPY, red: simulated strategy (x-axis: months, y-axis: dollars)

Others

  • Different training periods
  • Custom loss functions
  • Different assets
  • A collection of data from different assets from the same training period
  • Averaging the predictions of several models trained with the same hyperparameters and data to get consistent results

All the efforts above were to no avail and confirmed that a fixed-window prediction model utilizing a feed-forward neural net cannot beat the buy-and-hold position.

Wait, then is Aldridge and Avellaneda (2019) spreading false news?

NO. The terrible performance of our backtests above is caused first and foremost by the deviation of our strategy from the one mentioned by Aldridge and Avellaneda (2019). One such major deviation is the use of a fixed-window prediction model itself. From our interpretation of the paper, Aldridge and Avellaneda (2019) seem to have used a moving-window prediction model.

In fact, digging into the theoretical background shows that a fixed-window prediction model is bound to fail at making useful predictions.

Why the Fixed-Window Prediction Model Failed

Lack of data

We had more than 4000 data points as input to the feed-forward neural net in the example above. In traditional machine learning research, this might seem like an ideal amount of data. That is not the case for our project here, however.

It is important to consider what pattern we hope our neural net learns. Remember that we mentioned at the beginning of the article, under Rationale Behind Stock Price Trend-Based Analysis, that the neural net should be successful if it can detect the overreaction and underreaction of the market.

What we conclude here is not backed by any evidence, but it seems that there are so many deviations of the market price from the asset's fundamental value that it becomes impossible for the neural net to detect the specific trend-based pattern we are looking for. That is, the market is not as efficient as we thought it would be, causing too much noise in the data.

If there are certain patterns that only exist dominantly within the training period (June 30, 1993 to June 30, 2010), they could cause overfitting, which would hurt the prediction. Also, note that the pattern we hoped to look for can be described as a universal pattern: it should exist within any time frame because, as of yet, people do overreact and underreact to market information.

The effectiveness of the contrarian strategy demonstrated by Khandani and Lo (2007) shows us that even quite recently there existed large inefficiencies in the market that persisted for more than a decade (a Sharpe exceeding 50 was once achievable with a relatively simple strategy!). Taking this into consideration, proper training of the neural net could require a century, if not multiple centuries, of raw data to get rid of the persistent noise.

The conclusion can be the following: if we want viable feed-forward neural net predictions, we either need immensely more data, or more polished and engineered data (unlike the raw data we used here).

Another alternative is to switch the trend pattern we are looking for; we stop looking for a universal pattern, and instead go for the most recent pattern that we can detect. This is the idea behind the moving-window prediction.

Imbalance in data

Another problem with our selection of time period is that it is slightly more bullish than bearish. By simply predicting every future return to be positive, the neural net could achieve a directional prediction accuracy near 56%.
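
This baseline is easy to check directly on the targets built earlier; note that the article does not specify whether the roughly 56% figure refers to the training window or the test window, so the snippet below is only a sanity check.

```python
# Directional accuracy of the naive "always predict a positive return" baseline
baseline_accuracy = (train_y > 0).mean()
print(f"always-positive baseline accuracy: {baseline_accuracy:.1%}")
```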

This would not be a problem if it were mechanically possible to achieve a prediction accuracy higher than 56%. But what if the best prediction accuracy we could achieve through this method, when not sticking to one specific directional prediction, is below 56%? Then it makes sense for the neural net to predict every next return to be between 0 and 1, as that might be a straightforward way to achieve the lowest loss.

The discussion above is actually irrelevant to most of the processes undertaken during the research, as in most cases the neural net overfits, meaning it surpasses the 56% accuracy in directional prediction. But it proved to be an impediment in our earliest efforts, so we mention it for those who might attempt to reproduce our work.

The Ambitiousness of the Task

When we pause to think again about the goal of the task and the small amount of information we utilized to make strides toward that goal, we realize just how ambitious the task is.

So far, the results of the experiments above do tell us that more sophisticated methods are warranted.

So What Next?

The results above are far from discouraging. In a sense, the methods for asset return prediction we described in the article were bound to fail; our actual implementations only confirmed that belief.

As we discussed above, there are alternative methods to punch through the problem of lacking data. Currently we are implementing moving-window prediction models with varying engineered features and different machine learning models, and we are beginning to see improvements.

Please give us feedback if you notice logical fallacies or errors in implementation! Recommendations on papers, posts, courses, etc. are also welcome.

Authors

  1. Seouk Jun Kim: https://www.linkedin.com/in/seouk-jun-kim-a74921184/
  2. Sunmin Lee: https://www.linkedin.com/in/sun-min-lee-3116aa123/

Sources

  1. Aldridge, Irene E. and Marco Avellaneda. "Neural Networks in Finance: Design and Performance." (2019).
  2. Pedersen, Lasse Heje. "Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined." (2015).
  3. Ho, Danny, Yuxuan Huang, and Luiz F. Capretz. "Neural Network Models for Stock Selection Based on Fundamental Analysis." (2019).
  4. QRAFT. "AI Asset Management Report. How can AI innovate asset management?" Qraft Technologies, Medium. (2020).
  5. Khandani, Amir E. and Andrew W. Lo. "What Happened to the Quants in August 2007?" (November 4, 2007).
  6. yfinance, https://pypi.org/project/yfinance/

Original article: https://towardsdatascience.com/the-effectiveness-of-feed-forward-neural-networks-in-trend-based-trading-1-4074912cf5cd
