
Principal Component Analysis in Python

Nine good answers to this question are collected below.

I'd like to use principal component analysis (PCA) for dimensionality reduction. Does numpy or scipy already have it, or do I have to roll my own using numpy.linalg.eigh?

I don't just want to use singular value decomposition (SVD) because my input data is quite high-dimensional (~460 dimensions), so I think SVD will be slower than computing the eigenvectors of the covariance matrix.

I was hoping to find a premade, debugged implementation that already makes the right decisions about when to use which method, and which maybe does other optimizations I don't know about.



1> denis..:

A few months later, here's a small PCA class, and a picture:

#!/usr/bin/env python
""" a small class for Principal Component Analysis
Usage:
    p = PCA( A, fraction=0.90 )
In:
    A: an array of e.g. 1000 observations x 20 variables, 1000 rows x 20 columns
    fraction: use principal components that account for e.g.
        90 % of the total variance

Out:
    p.U, p.d, p.Vt: from numpy.linalg.svd, A = U . d . Vt
    p.dinv: 1/d or 0, see NR
    p.eigen: the eigenvalues of A.T A, in decreasing order (p.d**2).
        eigen[j] / eigen.sum() is variable j's fraction of the total variance;
        look at the first few eigen[] to see how many PCs get to 90 %, 95 % ...
    p.npc: number of principal components,
        e.g. 2 if the top 2 eigenvalues are >= `fraction` of the total.
        It's ok to change this; methods use the current value.

Methods:
    The methods of class PCA transform vectors or arrays of e.g.
    20 variables, 2 principal components and 1000 observations,
    using partial matrices U' d' Vt', parts of the full U d Vt:
    A ~ U' . d' . Vt' where e.g.
        U' is 1000 x 2
        d' is diag([ d0, d1 ]), the 2 largest singular values
        Vt' is 2 x 20.  Dropping the primes,

    d . Vt      2 principal vars = p.vars_pc( 20 vars )
    U           1000 obs = p.pc_obs( 2 principal vars )
    U . d . Vt  1000 obs, p.obs( 20 vars ) = pc_obs( vars_pc( vars ))
        fast approximate A . vars, using the `npc` principal components

    Ut              2 pcs = p.obs_pc( 1000 obs )
    V . dinv        20 vars = p.pc_vars( 2 principal vars )
    V . dinv . Ut   20 vars, p.vars( 1000 obs ) = pc_vars( obs_pc( obs )),
        fast approximate Ainverse . obs: vars that give ~ those obs.


Notes:
    PCA does not center or scale A; you usually want to first
        A -= A.mean(axis=0)
        A /= A.std(axis=0)
    with the little class Center or the like, below.

See also:
    http://en.wikipedia.org/wiki/Principal_component_analysis
    http://en.wikipedia.org/wiki/Singular_value_decomposition
    Press et al., Numerical Recipes (2 or 3 ed), SVD
    PCA micro-tutorial
    iris-pca .py .png

"""

from __future__ import division
import numpy as np
dot = np.dot
    # import bz.numpyutil as nu
    # dot = nu.pdot

__version__ = "2010-04-14 apr"
__author_email__ = "denis-bz-py at t-online dot de"

#...............................................................................
class PCA:
    def __init__( self, A, fraction=0.90 ):
        assert 0 <= fraction <= 1
            # A = U . diag(d) . Vt, O( m n^2 ), lapack_lite --
        self.U, self.d, self.Vt = np.linalg.svd( A, full_matrices=False )
        assert np.all( self.d[:-1] >= self.d[1:] )  # sorted
        self.eigen = self.d**2
        self.sumvariance = np.cumsum(self.eigen)
        self.sumvariance /= self.sumvariance[-1]
        self.npc = np.searchsorted( self.sumvariance, fraction ) + 1
        self.dinv = np.array([ 1/d if d > self.d[0] * 1e-6  else 0
                                for d in self.d ])

    def pc( self ):
        """ e.g. 1000 x 2 U[:, :npc] * d[:npc], to plot etc. """
        n = self.npc
        return self.U[:, :n] * self.d[:n]

    # These 1-line methods may not be worth the bother;
    # then use U d Vt directly --

    def vars_pc( self, x ):
        n = self.npc
        return self.d[:n] * dot( self.Vt[:n], x.T ).T  # 20 vars -> 2 principal

    def pc_vars( self, p ):
        n = self.npc
        return dot( self.Vt[:n].T, (self.dinv[:n] * p).T ) .T  # 2 PC -> 20 vars

    def pc_obs( self, p ):
        n = self.npc
        return dot( self.U[:, :n], p.T )  # 2 principal -> 1000 obs

    def obs_pc( self, obs ):
        n = self.npc
        return dot( self.U[:, :n].T, obs ) .T  # 1000 obs -> 2 principal

    def obs( self, x ):
        return self.pc_obs( self.vars_pc(x) )  # 20 vars -> 2 principal -> 1000 obs

    def vars( self, obs ):
        return self.pc_vars( self.obs_pc(obs) )  # 1000 obs -> 2 principal -> 20 vars


class Center:
    """ A -= A.mean() /= A.std(), inplace -- use A.copy() if need be
        uncenter(x) == original A . x
    """
        # mttiw
    def __init__( self, A, axis=0, scale=True, verbose=1 ):
        self.mean = A.mean(axis=axis)
        if verbose:
            print("Center -= A.mean:", self.mean)
        A -= self.mean
        if scale:
            std = A.std(axis=axis)
            self.std = np.where( std, std, 1. )
            if verbose:
                print("Center /= A.std:", self.std)
            A /= self.std
        else:
            self.std = np.ones( A.shape[-1] )
        self.A = A

    def uncenter( self, x ):
        return np.dot( self.A, x * self.std ) + np.dot( x, self.mean )


#...............................................................................
if __name__ == "__main__":
    import sys

    csv = "iris4.csv"  # wikipedia Iris_flower_data_set
        # 5.1,3.5,1.4,0.2  # ,Iris-setosa ...
    N = 1000
    K = 20
    fraction = .90
    seed = 1
    exec("\n".join( sys.argv[1:] ))  # N= ...
    np.random.seed(seed)
    np.set_printoptions( 1, threshold=100, suppress=True )  # .1f
    try:
        A = np.genfromtxt( csv, delimiter="," )
        N, K = A.shape
    except IOError:
        A = np.random.normal( size=(N, K) )  # gen correlated ?

    print("csv: %s  N: %d  K: %d  fraction: %.2g" % (csv, N, K, fraction))
    Center(A)
    print("A:", A)

    print("PCA ...", end=" ")
    p = PCA( A, fraction=fraction )
    print("npc:", p.npc)
    print("% variance:", p.sumvariance * 100)

    print("Vt[0], weights that give PC 0:", p.Vt[0])
    print("A . Vt[0]:", dot( A, p.Vt[0] ))
    print("pc:", p.pc())

    print("\nobs <-> pc <-> x: with fraction=1, diffs should be ~ 0")
    x = np.ones(K)
    # x = np.ones(( 3, K ))
    print("x:", x)
    pc = p.vars_pc(x)  # d' Vt' x
    print("vars_pc(x):", pc)
    print("back to ~ x:", p.pc_vars(pc))

    Ax = dot( A, x.T )
    pcx = p.obs(x)  # U' d' Vt' x
    print("Ax:", Ax)
    print("A'x:", pcx)
    print("max |Ax - A'x|: %.2g" % np.linalg.norm( Ax - pcx, np.inf ))

    b = Ax  # ~ back to original x, Ainv A x
    back = p.vars(b)
    print("~ back again:", back)
    print("max |back - x|: %.2g" % np.linalg.norm( back - x, np.inf ))

# end pca.py
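
For quick reference, here is a minimal usage sketch of the class above on synthetic data (shapes follow the docstring's example; the array itself is made up):

import numpy as np

A = np.random.normal(size=(1000, 20))  # 1000 observations x 20 variables
Center(A)                              # center and scale A in place
p = PCA(A, fraction=0.90)
print("npc:", p.npc)                   # PCs needed for 90 % of the variance
print("pc:", p.pc())                   # 1000 x npc projected observations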

(image: iris-pca plot)


FYI, there is an excellent January 2011 talk by C. Caramanis on [Robust PCA](http://videolectures.net/nipsworkshops2010_caramanis_rcf/snippet/).

2> ali_m..:

PCA using numpy.linalg.svd is super easy. Here's a simple demo:

import numpy as np
import matplotlib.pyplot as plt
# scipy.misc.lena() has since been removed from SciPy; the bundled
# grayscale raccoon-face image (scipy.datasets.face on SciPy >= 1.10,
# scipy.misc.face before that) is a drop-in substitute
try:
    from scipy.datasets import face
except ImportError:
    from scipy.misc import face

# the underlying signal is a sinusoidally modulated image
img = face(gray=True)[::2, ::2]  # downsample to keep memory use modest
t = np.arange(100)
time = np.sin(0.1*t)
real = time[:,np.newaxis,np.newaxis] * img[np.newaxis,...]

# we add some noise
noisy = real + np.random.randn(*real.shape)*255

# (observations, features) matrix
M = noisy.reshape(noisy.shape[0],-1)

# singular value decomposition factorises your data matrix such that:
# 
#   M = U*S*V.T     (where '*' is matrix multiplication)
# 
# * U and V are the singular matrices, containing orthogonal vectors of
#   unit length in their rows and columns respectively.
#
# * S is a diagonal matrix containing the singular values of M - these 
#   values squared divided by the number of observations will give the 
#   variance explained by each PC.
#
# * if M is considered to be an (observations, features) matrix, the PCs
#   themselves would correspond to the rows of S^(1/2)*V.T. if M is 
#   (features, observations) then the PCs would be the columns of
#   U*S^(1/2).
#
# * since U and V both contain orthonormal vectors, U*V.T is equivalent 
#   to a whitened version of M.

U, s, Vt = np.linalg.svd(M, full_matrices=False)
V = Vt.T

# PCs are already sorted by descending order 
# of the singular values (i.e. by the
# proportion of total variance they explain)

# if we use all of the PCs we can reconstruct the noisy signal perfectly
S = np.diag(s)
Mhat = np.dot(U, np.dot(S, V.T))
print("Using all PCs, MSE = %.6G" % (np.mean((M - Mhat)**2)))

# if we use only the first 20 PCs the reconstruction is less accurate
Mhat2 = np.dot(U[:, :20], np.dot(S[:20, :20], V[:,:20].T))
print("Using first 20 PCs, MSE = %.6G" % (np.mean((M - Mhat2)**2)))

fig, [ax1, ax2, ax3] = plt.subplots(1, 3)
ax1.imshow(img)
ax1.set_title('true image')
ax2.imshow(noisy.mean(0))
ax2.set_title('mean of noisy images')
ax3.imshow((s[0]**(1./2) * V[:,0]).reshape(img.shape))
ax3.set_title('first spatial PC')
plt.show()


@Alex Fair enough. I think this is another variant of the [XY problem](http://meta.stackexchange.com/a/66378/247805) - the OP said he didn't want an SVD-based solution because he *assumed* SVD would be too slow, probably without having tried it. In cases like this, I personally think it's more helpful to explain how to tackle the broader problem than to answer the question exactly in its original, narrower form.
I realize I'm a bit late to the party here, but the OP specifically asked for a solution that *avoids* singular value decomposition.

3> Noam Peled..:

You can use sklearn:

import sklearn.decomposition as deco
import numpy as np

# x: your (n_samples, n_features) data array; n_components: an int you choose
x = (x - np.mean(x, 0)) / np.std(x, 0)  # you need to normalize your data first
pca = deco.PCA(n_components)  # n_components is the number of components to keep
x_r = pca.fit(x).transform(x)
print('explained variance (first %d components): %.2f' % (n_components, sum(pca.explained_variance_ratio_)))



4> tom10..:

matplotlib.mlab has a PCA implementation.


The link to [matplotlib's PCA](http://matplotlib.sourceforge.net/api/mlab_api.html#matplotlib.mlab.PCA) has been updated.
The matplotlib.mlab implementation of PCA uses SVD.
Here is a [more detailed description](http://www.clear.rice.edu/comp130/12spring/pca/pca_docs.shtml) of its functionality and how to use it.
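
For reference, a minimal usage sketch of that class, from memory of the old mlab API (matplotlib.mlab.PCA was deprecated in Matplotlib 2.2 and removed in 3.1, so this requires an older release; attribute names may vary slightly between versions):

import numpy as np
from matplotlib.mlab import PCA  # requires Matplotlib < 3.1

data = np.random.randn(100, 5)   # observations x variables
result = PCA(data)               # centers and standardizes by default
print(result.fracs)              # fraction of variance explained per PC
print(result.Y[:3])              # data projected onto the principal axes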

5> ChristopheD..:

You might want to have a look at MDP.

I haven't had the chance to test it myself, but I've bookmarked it exactly for its PCA functionality.


MDP has not been maintained since 2012, so it doesn't look like the best solution.

6> dwf..:

SVD should work fine with 460 dimensions. It takes about 7 seconds on my Atom netbook. The eig() method takes *more* time (as it should, since it uses more floating-point operations) and will almost always be less accurate.

If you have fewer than 460 samples, what you want to do is diagonalize the scatter matrix (x - datamean)^T (x - datamean), assuming your data points are columns, and then left-multiply by (x - datamean). That could be faster in the case where you have more dimensions than data points; a sketch of the trick follows.
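
As a concrete illustration, here is a minimal sketch of that Gram-matrix trick, assuming observations in rows rather than columns (the helper name and test data are hypothetical):

import numpy as np

def pca_small_sample(X, n_components):
    # Gram-matrix trick for n_samples < n_features
    Xc = X - X.mean(axis=0)                  # center; observations in rows
    G = Xc.dot(Xc.T)                         # small n_samples x n_samples matrix
    evals, evecs = np.linalg.eigh(G)         # eigh returns ascending eigenvalues
    idx = np.argsort(evals)[::-1][:n_components]
    evals, evecs = evals[idx], evecs[:, idx]
    # left-multiply by the centered data to recover feature-space axes
    axes = Xc.T.dot(evecs) / np.sqrt(np.maximum(evals, 1e-12))
    return axes                              # (n_features, n_components)

X = np.random.randn(100, 460)                # fewer samples than dimensions
axes = pca_small_sample(X, 2)
scores = (X - X.mean(axis=0)).dot(axes)      # projected observations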



7> Anony-Mousse..:

You can quite easily "roll your own" using scipy.linalg (assuming a pre-centered dataset data):

import scipy.linalg

covmat = data.dot(data.T)  # data pre-centered; variables in rows
evs, evmat = scipy.linalg.eig(covmat)

Then evs are your eigenvalues, and evmat is your projection matrix.

If you want to keep d dimensions, use the d largest eigenvalues and the corresponding d eigenvectors; see the sketch below.

Given scipy.linalg's decompositions and numpy's matrix multiplications, what more do you need?
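
A minimal end-to-end sketch of that recipe (the data array is synthetic; note that scipy.linalg.eig does not sort its output, so the eigenvalues are ordered by hand):

import numpy as np
import scipy.linalg

data = np.random.normal(size=(20, 1000))     # variables x observations
data -= data.mean(axis=1)[:, np.newaxis]     # pre-center

covmat = data.dot(data.T)
evs, evmat = scipy.linalg.eig(covmat)
order = np.argsort(evs.real)[::-1]           # sort eigenvalues, largest first
d = 2
proj = evmat[:, order[:d]].real              # projection matrix, one PC per column
reduced = proj.T.dot(data)                   # d x n_observations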


You should look at @dwf's comment on [this answer](http://stackoverflow.com/a/1732758/3005513) for the dangers of using `eig()` on a covariance matrix.

8> sunqiang..:

I just finished reading the book Machine Learning: An Algorithmic Perspective. All the code examples in the book are written in Python (almost all with NumPy). The code snippet for principal components analysis in Chapter 10.2 may be worth reading. It uses numpy.linalg.eig.
By the way, I think SVD can handle 460 x 460 dimensions very well. I have computed a 6500 x 6500 SVD with numpy/scipy.linalg.svd on a very old PC: a Pentium III at 733 MHz. To be honest, the script needed a lot of memory (about 1.x GB) and a lot of time (about 30 minutes) to get the SVD result, but I don't think 460 x 460 on a modern PC will be a big problem unless you need to do SVD a huge number of times.


You should never use eig() on a covariance matrix when you can simply use svd(). Depending on how many components you plan to use and the size of your data matrix, the numerical error introduced by the former (it performs more floating-point operations) can become significant. For the same reason, you should never explicitly invert a matrix with inv() if what you're really interested in is the inverse times a vector or matrix; you should use solve() instead.

9> Nicolas Barb..:

You don't need full singular value decomposition (SVD): it computes all the eigenvalues and eigenvectors, which can be prohibitive for large matrices. scipy and its sparse module provide generic linear algebra functions that work on both sparse and dense matrices, among them the eig* family of functions:

http://docs.scipy.org/doc/scipy/reference/sparse.linalg.html#matrix-factorizations

Scikit-learn provides a Python PCA implementation which currently only supports dense matrices.

Timings:

In [1]: A = np.random.randn(1000, 1000)

In [2]: %timeit scipy.sparse.linalg.eigsh(A)
1 loops, best of 3: 802 ms per loop

In [3]: %timeit np.linalg.svd(A)
1 loops, best of 3: 5.91 s per loop


You don't need to compute a sparse matrix from your dense one. The algorithms provided in the sparse.linalg module rely only on the matrix-vector multiplication operation, via the matvec method of an Operator object. For a dense matrix, this is just matvec = dot(A, x). For the same reason, you don't need to compute the covariance matrix at all, but only to provide the operation dot(A.T, dot(A, x)) for A; see the sketch after these comments.
Actually, I guess I'm biased toward really large matrices. To me, a large matrix is more like 10⁶ x 10⁶ than 1000 x 1000 - in that case you often can't even store the covariance matrix...
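
A minimal sketch of that matrix-free idea, wrapping the implicit covariance product in a scipy.sparse.linalg.LinearOperator (the data matrix A here is synthetic):

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

A = np.random.randn(1000, 460)
A -= A.mean(axis=0)                      # center

n = A.shape[1]
cov_op = LinearOperator(
    (n, n),
    matvec=lambda x: A.T.dot(A.dot(x)),  # dot(A.T, dot(A, x)); covariance never formed
    dtype=A.dtype,
)
evals, evecs = eigsh(cov_op, k=6)        # the six largest-magnitude eigenpairs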