Invertibility of random matrices: norm of the inverse

By Mark Rudelson*

*Research was supported in part by NSF grant DMS-024380.

Abstract

Let $A$ be an $n \times n$ matrix, whose entries are independent copies of a centered random variable satisfying the subgaussian tail estimate. We prove that the operator norm of $A^{-1}$ does not exceed $C n^{3/2}$ with probability close to 1.
1. Introduction

Let $A$ be an $n \times n$ matrix, whose entries are independent, identically distributed random variables. The spectral properties of such matrices, in particular invertibility, have been extensively studied (see, e.g., [M] and the survey [DS]). While $A$ is almost surely invertible whenever its entries are absolutely continuous, the case of discrete entries is highly nontrivial. Even in the case when the entries of $A$ are independent random variables taking values $\pm 1$ with probability $1/2$, the precise order of the probability that $A$ is singular is unknown. Komlós [K1], [K2] proved that this probability is $o(1)$ as $n \to \infty$. This result was improved by Kahn, Komlós and Szemerédi [KKS], who showed that this probability is bounded above by $\theta^n$ for some absolute constant $\theta < 1$. The value of $\theta$ has recently been improved in a series of papers by Tao and Vu [TV1], [TV2] to $\theta = 3/4 + o(1)$ (the conjectured value is $\theta = 1/2 + o(1)$).
However, these papers do not address the quantitative characterization of invertibility, namely the norm of the inverse matrix, considered as an operator from $\mathbb{R}^n$ to $\mathbb{R}^n$. Random matrices are one of the standard tools in geometric functional analysis. They are used, in particular, to estimate the Banach-Mazur distance between finite-dimensional Banach spaces and to construct sections of convex bodies possessing certain properties. In all these questions the condition number or the distortion $\|A\| \cdot \|A^{-1}\|$ plays the crucial role. Since the norm of $A$ is usually highly concentrated, the distortion is determined by the norm of $A^{-1}$. The estimate of the norm of $A^{-1}$ is known only in the case when $A$ is a matrix with independent $N(0,1)$ Gaussian entries. In this case Edelman [Ed] and Szarek [Sz2] proved that $\|A^{-1}\| \le c\sqrt{n}$ with probability close to 1 (see also [Sz1], where the spectral properties of a Gaussian matrix are applied to an important question from the geometry of Banach spaces). For other random matrices, including a random $\pm 1$ matrix, even a polynomial bound was unknown. Proving such a polynomial estimate is the main aim of this paper.
More results are known about rectangular random matrices. Let $\Gamma$ be an $N \times n$ matrix, whose entries are independent random variables. If $N > n$, then such a matrix can be considered as a linear operator $\Gamma : \mathbb{R}^n \to Y$, where $Y = \Gamma \mathbb{R}^n$. If we consider a family $\Gamma_n$ of such matrices with $n/N \to \alpha$ for a fixed constant $\alpha < 1$, then the norms of $(N^{-1/2} \Gamma_n |_Y)^{-1}$ converge a.s. to $(1 - \sqrt{\alpha})^{-1}$, provided that the fourth moments of the entries are uniformly bounded [BY]. The random matrices for which $n/N = 1 - o(1)$ are considered in [LPRT]. If the entries of such a matrix satisfy certain moment conditions and $n/N > 1 - c/\log n$, then $\|(\Gamma |_Y)^{-1}\| \le C(n/N) \cdot n^{-1/2}$ with probability exponentially close to 1.
The proof of the last result is based on the $\varepsilon$-net argument. To describe it we have to introduce some notation. For $p \ge 1$ let $B_p^n$ denote the unit ball of the Banach space $\ell_p^n$. Let $E \subset \mathbb{R}^n$ and let $B \subset \mathbb{R}^n$ be a convex symmetric body. Let $\varepsilon > 0$. We say that a set $F \subset \mathbb{R}^n$ is an $\varepsilon$-net for $E$ with respect to $B$ if

$$E \subset \bigcup_{x \in F} (x + \varepsilon B).$$

The smallest cardinality of an $\varepsilon$-net will be denoted by $N(E, B, \varepsilon)$. For a point $x \in \mathbb{R}^n$, $\|x\|$ stands for the standard Euclidean norm, and for a linear operator $T : \mathbb{R}^n \to \mathbb{R}^m$, $\|T\|$ denotes the operator norm of $T : \ell_2^n \to \ell_2^m$. Let $E \subset S^{n-1}$ be a set such that for any fixed $x \in E$ there is a good bound for the probability that $\|\Gamma x\|$ is small. We shall call such a bound the small ball probability estimate. If $N(E, B_2^n, \varepsilon)$ is small, this bound implies that with high probability $\|\Gamma x\|$ is large for all $x$ from an $\varepsilon$-net for $E$. Then the approximation is used to derive that in this case $\|\Gamma x\|$ is large for all $x \in E$. Finally, the sphere $S^{n-1}$ is partitioned into two sets for which the above method works. This argument is applicable because the small ball probability is controlled by a function of $N$, while the size of an $\varepsilon$-net depends on $n < N$.
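To make the two steps of this scheme explicit, here is a schematic version, written for illustration only under a hypothetical single-point bound (the precise small ball estimates differ between the papers cited above). Suppose that for each fixed $x \in E$ one has $P(\|\Gamma x\| \le t) \le p_t$, and let $F$ be an $\varepsilon$-net for $E$ with respect to $B_2^n$ of cardinality $N(E, B_2^n, \varepsilon)$. The union bound gives

$$P\bigl( \exists\, y \in F : \|\Gamma y\| \le t \bigr) \le N(E, B_2^n, \varepsilon) \cdot p_t,$$

and on the complementary event, intersected with the event $\|\Gamma\| \le K$, every $x \in E$ satisfies, for some $y \in F$ with $\|x - y\| \le \varepsilon$,

$$\|\Gamma x\| \ge \|\Gamma y\| - \|\Gamma\|\,\|x - y\| \ge t - \varepsilon K.$$

In this way a small net combined with a good small ball estimate yields a lower bound on $\|\Gamma x\|$ over all of $E$.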
The case of a square random matrix is more delicate. Indeed, in this case the small ball probability estimate is too weak to produce a nontrivial estimate for the probability that $\|\Gamma x\|$ is large for all points of an $\varepsilon$-net. To overcome this difficulty, we use the $\varepsilon$-net argument for one part of the sphere and work with conditional probability on the other part. Also, we will need more elaborate small ball probability estimates than those employed in [LPRT]. To obtain such estimates we use the method of Halász, which lies in the foundation of the arguments of [KKS], [TV1], [TV2].
Let $P(\Omega)$ denote the probability of the event $\Omega$, and let $E\xi$ denote the expectation of the random variable $\xi$. A random variable $\xi$ is called subgaussian if for any $t > 0$

$$(1.1)\qquad P(|\xi| > t) \le C \exp(-ct^2).$$

The class of subgaussian variables includes many natural types of random variables, in particular, normal and bounded ones. It is well known that the tail decay condition (1.1) is equivalent to the moment condition $(E|\xi|^p)^{1/p} \le C'\sqrt{p}$ for all $p \ge 1$.
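For one direction of this equivalence, a standard computation (sketched here only as an illustration; the constants are not those used in the paper) integrates the tail bound (1.1):

$$E|\xi|^p = \int_0^\infty p\, t^{p-1}\, P(|\xi| > t)\, dt \le C \int_0^\infty p\, t^{p-1} e^{-ct^2}\, dt \le (C'\sqrt{p})^p$$

for a suitable constant $C'$, which gives $(E|\xi|^p)^{1/p} \le C'\sqrt{p}$. The reverse implication follows from Markov's inequality, $P(|\xi| > t) \le (C'\sqrt{p}/t)^p$, applied with $p$ proportional to $t^2$.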
The letters $c, C, C'$, etc. denote unimportant absolute constants, whose value may change from line to line. Besides these constants, the paper contains many absolute constants which are used throughout the proof. For the reader's convenience we use a standard notation for such important absolute constants. Namely, if a constant appears in the formulation of Lemma or Theorem x.y, we denote it $C_{x.y}$ or $c_{x.y}$.
The main result of this paper is the polynomial bound for the norm of $A^{-1}$. We shall formulate it in terms of the smallest singular number of $A$:

$$s_n(A) = \min_{x \in S^{n-1}} \|Ax\|.$$

Note that if the matrix $A$ is invertible, then $\|A^{-1}\| = 1/s_n(A)$.
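This identity is simply the variational description of the norm of the inverse; spelled out, for an invertible $A$,

$$\|A^{-1}\| = \max_{y \ne 0} \frac{\|A^{-1}y\|}{\|y\|} = \max_{x \ne 0} \frac{\|x\|}{\|Ax\|} = \Bigl( \min_{x \in S^{n-1}} \|Ax\| \Bigr)^{-1} = \frac{1}{s_n(A)},$$

where the second equality uses the substitution $y = Ax$, which ranges over all nonzero vectors as $x$ does.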
Theorem 1.1. Let $\xi$ be a centered subgaussian random variable of variance 1. Let $A$ be an $n \times n$ matrix whose entries are independent copies of $\xi$. Then for any $\varepsilon > c_{1.1}/\sqrt{n}$,

$$P\Bigl( \exists\, x \in S^{n-1} \ :\ \|Ax\| < \frac{\varepsilon}{C_{1.1}\, n^{3/2}} \Bigr) < \varepsilon$$

if $n$ is large enough.

More precisely, we prove that the probability above is bounded by $\varepsilon/2 + 4\exp(-cn)$ for all $n \in \mathbb{N}$.

The inequality in Theorem 1.1 means that $\|A^{-1}\| \le C_{1.1} \cdot n^{3/2}/\varepsilon$ with probability greater than $1 - \varepsilon$. Equivalently, the smallest singular number of $A$ is at least $\varepsilon/(C_{1.1} \cdot n^{3/2})$.
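As a concrete illustration (not a statement from the paper), fix $\varepsilon = 0.01$; this choice is admissible once $n$ is large enough that $c_{1.1}/\sqrt{n} < 0.01$, and Theorem 1.1 then yields

$$P\bigl( \|A^{-1}\| \le 100\, C_{1.1}\, n^{3/2} \bigr) \ge 0.99,$$

so, for example, a random $\pm 1$ matrix has $\|A^{-1}\| = O(n^{3/2})$ with probability at least $0.99$.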
An important feature of Theorem 1.1 is its universality. Namely, the probability estimate holds for all subgaussian random variables, regardless of their nature. Moreover, the only place where we use the assumption that $\xi$ is subgaussian is Lemma 3.3 below.
     
