1. A pandas technique
apply() and applymap() are methods of the DataFrame type, while map() is a method of the Series type. apply() operates on a whole column or row of a DataFrame at a time, applymap() is element-wise and is applied to every element of a DataFrame, and map() is likewise element-wise, calling the function once for each element of a Series.
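A minimal sketch of the difference, using a small made-up DataFrame (the column names and numbers are purely illustrative):

import pandas as pd
import numpy as np

df = pd.DataFrame(np.arange(6).reshape(3, 2), columns=['a', 'b'])
df.apply(np.sum)              # apply: acts on each column (or each row with axis=1) of the DataFrame
df.applymap(lambda x: x * 2)  # applymap: acts on every single element of the DataFrame
df['a'].map(lambda x: x + 1)  # map: acts on every single element of a Series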
2. PCA decomposition of the German DAX30 index
The DAX30 index contains 30 stocks. That may not sound like many, but it is enough that it is worth running a principal component analysis to find the most influential ones. The idea behind PCA should be familiar: roughly speaking, it finds the few directions in the data that explain the most variance, and mathematically it rests on a matrix decomposition such as the SVD.
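As a plain-numpy illustration of that idea (not part of the original code, toy data only), PCA can be read off directly from the SVD of the standardized data matrix:

import numpy as np

X = np.random.standard_normal((100, 5))           # toy data: 100 observations, 5 variables
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize each column
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # SVD of the data matrix
explained = s ** 2 / (s ** 2).sum()               # share of variance explained by each component
scores = X.dot(Vt.T)                              # principal component scores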
First, the code.
import pandas as pd
import pandas.io.data as web
import numpy as np
np.random.seed(1000)
import scipy.stats as scs
import statsmodels.api as sm
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.decomposition import KernelPCA  # import the PCA class from scikit-learn

symbols = ['ADS.DE', 'ALV.DE', 'BAS.DE', 'BAYN.DE', 'BEI.DE', 'BMW.DE',
           'CBK.DE', 'CON.DE', 'DAI.DE', 'DB1.DE', 'DBK.DE', 'DPW.DE',
           'DTE.DE', 'EOAN.DE', 'FME.DE', 'FRE.DE', 'HEI.DE', 'HEN3.DE',
           'IFX.DE', 'LHA.DE', 'LIN.DE', 'LXS.DE', 'MRK.DE', 'MUV2.DE',
           'RWE.DE', 'SAP.DE', 'SDF.DE', 'SIE.DE', 'TKA.DE', 'VOW3.DE',
           '^GDAXI']  # tickers of the 30 DAX stocks plus the DAX index itself, 31 series in total

data = pd.DataFrame()
for sym in symbols:  # fetch the data
    data[sym] = web.DataReader(sym, data_source='yahoo')['Close']
data = data.dropna()  # drop rows with missing data

dax = pd.DataFrame(data.pop('^GDAXI'))  # pull the index out on its own; pop also removes the column from data

scale_function = lambda x: (x - x.mean()) / x.std()

pca = KernelPCA().fit(data.apply(scale_function))  # apply() is used here: the data must be standardized before the PCA
get_we = lambda x: x / x.sum()
print get_we(pca.lambdas_)[:10]
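If pandas.io.data is not available in your pandas version (it was later moved out into the separate pandas_datareader package), the same DataReader interface can be imported as follows, assuming pandas_datareader is installed:

import pandas_datareader.data as web  # provides web.DataReader with the same call signature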
This prints the normalized eigenvalues of the first ten principal components, i.e. how much each of them contributes to explaining the DAX30 index.
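To see how much of the total variance those first ten components capture together, you can also sum the normalized weights (a small addition that is not in the original listing):

print get_we(pca.lambdas_)[:10].sum()  # cumulative variance share of the first 10 components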
pca = KernelPCA(n_components=1).fit(data.apply(scale_function))
dax['pca_1'] = pca.transform(data)
dax.apply(scale_function).plot(figsize=(8, 4))

pca = KernelPCA(n_components=5).fit(data.apply(scale_function))
weights = get_we(pca.lambdas_)
dax['pca_5'] = np.dot(pca.transform(data), weights)
Here we fit the index with only the first component and then with the first five components, and the result turns out to be surprisingly good, so the dimensionality reduction has done its job. Let's look at how well the PCA reconstruction matches the index.
plt.figure(figsize=(8, 4))
plt.scatter(dax['pca_5'], dax['^GDAXI'], color='r')
Here we draw a scatter plot of the PCA-reconstructed values against the original index values.
We can see that the overall fit is good, but there are clearly systematic deviations at the two ends and in the middle. If we want to improve on this, we can run the PCA separately on sub-periods of the data, which should give a better fit; see the sketch below.
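A minimal sketch of that idea, continuing with the variables defined above and fitting a separate one-component PCA on two sub-periods (the cutoff date is hypothetical, chosen only for illustration):

cutoff = '2012-1-1'  # hypothetical split date
early, late = data[data.index < cutoff], data[data.index >= cutoff]
dax['pca_sub'] = 0.0
for subset in (early, late):
    pca = KernelPCA(n_components=1).fit(subset.apply(scale_function))
    dax.loc[subset.index, 'pca_sub'] = pca.transform(subset)[:, 0]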