<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://www.vigyanwiki.in/index.php?action=history&amp;feed=atom&amp;title=Matrix_norm</id>
	<title>Matrix norm - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://www.vigyanwiki.in/index.php?action=history&amp;feed=atom&amp;title=Matrix_norm"/>
	<link rel="alternate" type="text/html" href="https://www.vigyanwiki.in/index.php?title=Matrix_norm&amp;action=history"/>
	<updated>2026-05-08T11:02:13Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.3</generator>
	<entry>
		<id>https://www.vigyanwiki.in/index.php?title=Matrix_norm&amp;diff=216282&amp;oldid=prev</id>
		<title>Manidh: 1 revision imported</title>
		<link rel="alternate" type="text/html" href="https://www.vigyanwiki.in/index.php?title=Matrix_norm&amp;diff=216282&amp;oldid=prev"/>
		<updated>2023-07-14T06:08:53Z</updated>

		<summary type="html">&lt;p&gt;1 revision imported&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en-GB&quot;&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;1&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 11:38, 14 July 2023&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-notice&quot; lang=&quot;en-GB&quot;&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>Manidh</name></author>
	</entry>
	<entry>
		<id>https://www.vigyanwiki.in/index.php?title=Matrix_norm&amp;diff=216281&amp;oldid=prev</id>
		<title>wikipedia&gt;Jags1111: often used in mathematical optimization to search for low-rank matrices.</title>
		<link rel="alternate" type="text/html" href="https://www.vigyanwiki.in/index.php?title=Matrix_norm&amp;diff=216281&amp;oldid=prev"/>
		<updated>2023-06-15T06:00:56Z</updated>

		<summary type="html">&lt;p&gt;often used in mathematical optimization to search for low-rank matrices.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{short description|Norm on a vector space of matrices}}&lt;br /&gt;
{{For|the general concept|Norm (mathematics)}}&lt;br /&gt;
{{multiple issues|&lt;br /&gt;
{{lead too short|date=March 2023}}&lt;br /&gt;
{{context|date=March 2023}}}}&lt;br /&gt;
In [[mathematics]], a '''matrix norm''' is a [[vector norm]] in a [[vector space]] whose elements (vectors) are [[matrix (mathematics)|matrices]] (of given dimensions).&lt;br /&gt;
&lt;br /&gt;
== Preliminaries ==&lt;br /&gt;
&lt;br /&gt;
Given a [[field (mathematics)|field]] &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; of either [[real number|real]] or [[complex number]]s, let &amp;lt;math&amp;gt;K^{m \times n}&amp;lt;/math&amp;gt; be the {{mvar|K}}-[[vector space]] of matrices with &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; rows and &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; columns and entries in the field &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt;.  A matrix norm is a [[Norm (mathematics)|norm]] on &amp;lt;math&amp;gt;K^{m \times n}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This article will always write such norms with [[double vertical bar]]s (like so: &amp;lt;math&amp;gt;\|A\|&amp;lt;/math&amp;gt;).  Thus, the matrix norm is a [[Function (mathematics)|function]] &amp;lt;math&amp;gt;\|\cdot\| : K^{m \times n} \to \R&amp;lt;/math&amp;gt; that must satisfy the following properties:&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;{{Cite web| last=Weisstein| first=Eric W.| title=Matrix Norm| url=https://mathworld.wolfram.com/MatrixNorm.html| access-date=2020-08-24| website=mathworld.wolfram.com| language=en}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;{{Cite web|title=Matrix norms |url=http://fourier.eng.hmc.edu/e161/lectures/algebra/node12.html| access-date=2020-08-24| website=fourier.eng.hmc.edu}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For all scalars &amp;lt;math&amp;gt;\alpha \in K&amp;lt;/math&amp;gt; and matrices &amp;lt;math&amp;gt;A, B \in K^{m \times n}&amp;lt;/math&amp;gt;,&lt;br /&gt;
*&amp;lt;math&amp;gt;\|A\|\ge 0&amp;lt;/math&amp;gt; (''positive-valued'')&lt;br /&gt;
*&amp;lt;math&amp;gt;\|A\|= 0 \iff A=0_{m,n}&amp;lt;/math&amp;gt; (''definite'')&lt;br /&gt;
*&amp;lt;math&amp;gt;\left\|\alpha A\right\|=\left|\alpha\right| \left\|A\right\|&amp;lt;/math&amp;gt; (''absolutely homogeneous'')&lt;br /&gt;
*&amp;lt;math&amp;gt;\|A+B\| \le \|A\|+\|B\|&amp;lt;/math&amp;gt; (''sub-additive'' or satisfying the ''triangle inequality'')&lt;br /&gt;
&lt;br /&gt;
The only feature distinguishing matrices from rearranged vectors is [[matrix multiplication|multiplication]].  Matrix norms are particularly useful if they are also '''sub-multiplicative''':&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;{{Cite journal|last=Malek-Shahmirzadi |first=Massoud |date=1983|title=A characterization of certain classes of matrix norms |journal=Linear and Multilinear Algebra|language=en |volume=13 | issue=2 |pages=97–99 | doi=10.1080/03081088308817508| issn=0308-1087}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\left\|AB\right\| \le \left\|A\right\| \left\|B\right\| &amp;lt;/math&amp;gt;&amp;lt;ref group=&amp;quot;Note&amp;quot;&amp;gt;The condition only applies when the product is defined, such as the case of [[Square matrix|square matrices]] ({{math|1=''m'' = ''n''}}).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Every norm on {{math|''K''&amp;lt;sup&amp;gt;''n''×''n''&amp;lt;/sup&amp;gt;}} can be rescaled to be sub-multiplicative; in some books, the terminology ''matrix norm'' is reserved for sub-multiplicative norms.&amp;lt;ref&amp;gt;{{Cite book|last=Horn|first=Roger A. | title=Matrix analysis |date=2012 | publisher=Cambridge University Press | others=Johnson, Charles R.| isbn=978-1-139-77600-4 |edition=2nd |location=Cambridge |pages=340–341 |oclc=817236655}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Matrix norms induced by vector norms==&lt;br /&gt;
{{Main|Operator norm}}&lt;br /&gt;
Suppose a [[vector norm]] &amp;lt;math&amp;gt;\|\cdot\|_{\alpha}&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;K^n&amp;lt;/math&amp;gt; and a vector norm &amp;lt;math&amp;gt;\|\cdot\|_{\beta}&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;K^m&amp;lt;/math&amp;gt; are given. Any &amp;lt;math&amp;gt;m \times n&amp;lt;/math&amp;gt; matrix {{mvar|A}} induces a linear operator from &amp;lt;math&amp;gt;K^n&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;K^m&amp;lt;/math&amp;gt; with respect to the standard basis, and one defines the corresponding ''induced norm'' or ''[[operator norm]]'' or ''subordinate norm'' on the space &amp;lt;math&amp;gt;K^{m \times n}&amp;lt;/math&amp;gt; of all &amp;lt;math&amp;gt;m \times n&amp;lt;/math&amp;gt; matrices as follows:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \begin{align}&lt;br /&gt;
\|A\|_{\alpha,\beta} &lt;br /&gt;
&amp;amp;= \sup\{\|Ax\|_\beta : x\in K^n \text{ with }\|x\|_\alpha = 1\} \\&lt;br /&gt;
&amp;amp;= \sup\left\{\frac{\|Ax\|_\beta}{\|x\|_\alpha} : x\in K^n \text{ with } x\ne 0\right\}.&lt;br /&gt;
\end{align} &amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt; \sup &amp;lt;/math&amp;gt; denotes the [[Infimum and supremum|supremum]]. This norm measures how much the mapping induced by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; can stretch vectors.&lt;br /&gt;
Depending on the vector norms &amp;lt;math&amp;gt;\|\cdot\|_{\alpha}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\|\cdot\|_{\beta}&amp;lt;/math&amp;gt; used, notation other than &amp;lt;math&amp;gt;\|\cdot\|_{\alpha,\beta}&amp;lt;/math&amp;gt; can be used for the operator norm.&lt;br /&gt;
&lt;br /&gt;
===Matrix norms induced by vector p-norms===&lt;br /&gt;
If the [[Vector norm#p-norm|''p''-norm for vectors]] (&amp;lt;math&amp;gt;1 \leq p \leq \infty&amp;lt;/math&amp;gt;) is used for both spaces &amp;lt;math&amp;gt;K^n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;K^m&amp;lt;/math&amp;gt;, then the corresponding operator norm is:&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \|A\|_p = \sup_{x \ne 0} \frac{\| A x\| _p}{\|x\|_p}. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These induced norms are different from the [[#&amp;quot;Entrywise&amp;quot; matrix norms|&amp;quot;entry-wise&amp;quot;]] ''p''-norms and the [[Schatten norm|Schatten ''p''-norms]] for matrices treated below, which are also usually denoted by &amp;lt;math&amp;gt; \|A\|_p .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the special cases of &amp;lt;math&amp;gt;p = 1, \infty&amp;lt;/math&amp;gt;, the induced matrix norms can be computed or estimated by&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \|A\|_1 = \max_{1 \leq j \leq n} \sum_{i=1}^m | a_{ij} |, &amp;lt;/math&amp;gt;&lt;br /&gt;
which is simply the maximum absolute column sum of the matrix;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \|A\|_\infty = \max_{1 \leq i \leq m} \sum _{j=1}^n | a_{ij} |, &amp;lt;/math&amp;gt;&lt;br /&gt;
which is simply the maximum absolute row sum of the matrix.&lt;br /&gt;
&lt;br /&gt;
For example, for&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;A = \begin{bmatrix} -3 &amp;amp; 5 &amp;amp; 7 \\ 2 &amp;amp; 6 &amp;amp; 4 \\ 0 &amp;amp; 2 &amp;amp; 8 \\ \end{bmatrix},&amp;lt;/math&amp;gt;&lt;br /&gt;
we have that&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\|A\|_1 = \max(|{-3}|+2+0; 5+6+2; 7+4+8) = \max(5,13,19) = 19,&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\|A\|_\infty = \max(|{-3}|+5+7; 2+6+4;0+2+8) = \max(15,12,10) = 15.&amp;lt;/math&amp;gt;&lt;br /&gt;
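As an illustrative sketch (not part of the original article), the two computations above can be checked with NumPy, whose norm function supports the induced 1- and infinity-norms directly:

```python
import numpy as np

# Sketch (assumes NumPy): checking the worked example above.
# ord=1 gives the maximum absolute column sum; ord=np.inf the row sum.
A = np.array([[-3, 5, 7],
              [2, 6, 4],
              [0, 2, 8]])

norm_1 = np.linalg.norm(A, ord=1)         # 19.0
norm_inf = np.linalg.norm(A, ord=np.inf)  # 15.0
print(norm_1, norm_inf)
```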
&lt;br /&gt;
{{anchor|Spectral norm}}&lt;br /&gt;
In the special case of &amp;lt;math&amp;gt;p = 2&amp;lt;/math&amp;gt; (the [[Euclidean norm]] or &amp;lt;math&amp;gt;\ell_2&amp;lt;/math&amp;gt;-norm for vectors), the induced matrix norm is the ''spectral norm''.  (The spectral norm should not be confused with the spectral radius, with which it does ''not'' coincide in general &amp;amp;mdash; see [[Spectral radius]] for further discussion.)  The spectral norm of a matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is the largest [[singular value]] of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; (i.e., the square root of the largest [[eigenvalue]] of the matrix &amp;lt;math&amp;gt;A^*A&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;A^*&amp;lt;/math&amp;gt; denotes the [[conjugate transpose]] of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;):&amp;lt;ref&amp;gt;Carl D. Meyer, Matrix Analysis and Applied Linear Algebra, §5.2, p.281, Society for Industrial &amp;amp; Applied Mathematics, June 2000.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \|A\|_2 = \sqrt{\lambda_{\max}\left(A^* A\right)} = \sigma_{\max}(A).&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\sigma_{\max}(A)&amp;lt;/math&amp;gt; represents the largest singular value of matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;. Also,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \| A^* A\|_2 = \| A A^* \|_2 = \|A\|_2^2&amp;lt;/math&amp;gt;&lt;br /&gt;
since &amp;lt;math&amp;gt;\| A^* A\|_2 = \sigma_{\max}(A^*A) = \sigma_{\max}(A)^2 = \|A\|^2_2&amp;lt;/math&amp;gt; and similarly &amp;lt;math&amp;gt;\|AA^*\|_2 = \|A\|^2_2&amp;lt;/math&amp;gt; by [[singular value decomposition]] (SVD). There is another important inequality:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \|A\| _2 = \sigma_{\max}(A) \leq \|A\|_{\rm F} = \left(\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2\right)^{\frac{1}{2}} = \left(\sum_{k=1}^{\min(m,n)} \sigma_{k}^2\right)^{\frac{1}{2}},&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\|A\|_\textrm{F}&amp;lt;/math&amp;gt; is the [[#Frobenius norm|Frobenius norm]]. Equality holds if and only if the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a rank-one matrix or a zero matrix. This inequality can be derived from the fact that the trace of a matrix is equal to the sum of its eigenvalues.&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;p=2&amp;lt;/math&amp;gt; we have an equivalent definition for &amp;lt;math&amp;gt;\|A\|_2&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;\sup\{|x^* A y| : x \in K^m, y \in K^n \text{ with }\|x\|_2 = \|y\|_2 = 1\}&amp;lt;/math&amp;gt;. This can be shown to be equivalent to the above definition using the [[Cauchy–Schwarz inequality]].&lt;br /&gt;
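A numerical sketch (not part of the original article, assuming NumPy): the spectral norm is the largest singular value returned by the SVD, and it agrees with the induced 2-norm while remaining bounded by the Frobenius norm:

```python
import numpy as np

# Sketch (assumes NumPy): the spectral norm as the largest singular value.
A = np.array([[-3.0, 5.0, 7.0],
              [2.0, 6.0, 4.0],
              [0.0, 2.0, 8.0]])

sigma = np.linalg.svd(A, compute_uv=False)  # singular values, descending
spectral = sigma[0]

# Agrees with the induced 2-norm; the Frobenius norm is an upper bound.
print(np.isclose(spectral, np.linalg.norm(A, ord=2)))  # True
print(np.isclose(np.sqrt(np.sum(sigma ** 2)), np.linalg.norm(A, 'fro')))  # True
```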
&lt;br /&gt;
===Matrix norms induced by vector α- and β- norms===&lt;br /&gt;
Suppose vector norms &amp;lt;math&amp;gt;\|\cdot\|_{\alpha}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\|\cdot\|_{\beta}&amp;lt;/math&amp;gt; are used for spaces &amp;lt;math&amp;gt;K^n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;K^m&amp;lt;/math&amp;gt; respectively, the corresponding operator norm is:&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \begin{align}&lt;br /&gt;
\|A\|_{\alpha,\beta} &lt;br /&gt;
&amp;amp;= \sup\{\|Ax\|_\beta : x\in K^n \text{ with }\|x\|_\alpha = 1\}.&lt;br /&gt;
\end{align} &amp;lt;/math&amp;gt;In the special cases of &amp;lt;math&amp;gt;\alpha = 2&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta=\infty&amp;lt;/math&amp;gt;, the induced matrix norms can be computed by&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \|A\|_{2,\infty}= \max_{1\le i\le m}\|A_{i:}\|_2, &amp;lt;/math&amp;gt;where &amp;lt;math&amp;gt;A_{i:}&amp;lt;/math&amp;gt; is the i-th row of matrix &amp;lt;math&amp;gt; A &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In the special cases of &amp;lt;math&amp;gt;\alpha = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta=2&amp;lt;/math&amp;gt;, the induced matrix norms can be computed by&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \|A\|_{1, 2} = \max_{1\le j\le n}\|A_{:j}\|_2, &amp;lt;/math&amp;gt;where &amp;lt;math&amp;gt;A_{:j}&amp;lt;/math&amp;gt; is the j-th column of matrix &amp;lt;math&amp;gt; A &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Hence, &amp;lt;math&amp;gt; \|A\|_{2,\infty} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; \|A\|_{1, 2} &amp;lt;/math&amp;gt; are the maximum row and column 2-norm of the matrix, respectively.&lt;br /&gt;
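As a sketch (not part of the original article, assuming NumPy), these two mixed induced norms reduce to simple row- and column-wise computations:

```python
import numpy as np

# Sketch (assumes NumPy): the (2, inf)- and (1, 2)-induced norms as the
# maximum row 2-norm and maximum column 2-norm, respectively.
A = np.array([[-3.0, 5.0, 7.0],
              [2.0, 6.0, 4.0],
              [0.0, 2.0, 8.0]])

norm_2_inf = np.linalg.norm(A, axis=1).max()  # max row 2-norm
norm_1_2 = np.linalg.norm(A, axis=0).max()    # max column 2-norm
print(norm_2_inf, norm_1_2)
```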
&lt;br /&gt;
===Properties===&lt;br /&gt;
&lt;br /&gt;
Any operator norm is [[#Consistent and compatible norms|consistent]]  with the vector norms that induce it, giving&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\|Ax\|_\beta \leq \|A\|_{\alpha,\beta}\|x\|_\alpha.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;\|\cdot\|_{\alpha,\beta}&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;\|\cdot\|_{\beta,\gamma}&amp;lt;/math&amp;gt;; and &amp;lt;math&amp;gt;\|\cdot\|_{\alpha,\gamma}&amp;lt;/math&amp;gt; are operator norms induced by the respective pairs of vector norms &amp;lt;math&amp;gt;(\|\cdot\|_{\alpha}, \|\cdot\|_{\beta})&amp;lt;/math&amp;gt;; &amp;lt;math&amp;gt;(\|\cdot\|_{\beta}, \|\cdot\|_{\gamma})&amp;lt;/math&amp;gt;; and &amp;lt;math&amp;gt;(\|\cdot\|_{\alpha}, \|\cdot\|_{\gamma})&amp;lt;/math&amp;gt;.  Then,&lt;br /&gt;
:&amp;lt;math&amp;gt;\|AB\|_{\alpha,\gamma} \leq \|A\|_{\beta, \gamma} \|B\|_{\alpha, \beta} ;&amp;lt;/math&amp;gt;&lt;br /&gt;
this follows from&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\|ABx\|_{\gamma} \leq \|A\|_{\beta, \gamma} \|Bx\|_{\beta} \leq \|A\|_{\beta, \gamma} \|B\|_{\alpha, \beta} \|x\|_{\alpha}&amp;lt;/math&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\sup_{\|x\|_\alpha = 1} \|ABx \|_{\gamma} = \|AB\|_{\alpha, \gamma} .&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Square matrices===&lt;br /&gt;
Suppose &amp;lt;math&amp;gt;\|\cdot\|_{\alpha, \alpha}&amp;lt;/math&amp;gt; is an operator norm on the space of square matrices &amp;lt;math&amp;gt;K^{n \times n}&amp;lt;/math&amp;gt;&lt;br /&gt;
induced by vector norms &amp;lt;math&amp;gt;\|\cdot\|_{\alpha}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\|\cdot\|_\alpha&amp;lt;/math&amp;gt;.&lt;br /&gt;
Then, the operator norm is a sub-multiplicative matrix norm: &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\|AB\|_{\alpha, \alpha} \leq \|A\|_{\alpha, \alpha} \|B\|_{\alpha, \alpha}.&amp;lt;/math&amp;gt;&lt;br /&gt;
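A randomized spot-check of this sub-multiplicativity (an illustrative sketch, not part of the original article, assuming NumPy):

```python
import numpy as np

# Sketch (assumes NumPy): sub-multiplicativity of the induced 2-norm
# on square matrices, checked on random inputs.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = np.linalg.norm(A @ B, ord=2)
rhs = np.linalg.norm(A, ord=2) * np.linalg.norm(B, ord=2)
print(lhs, rhs)  # lhs never exceeds rhs
```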
&lt;br /&gt;
Moreover, any such norm satisfies the inequality&lt;br /&gt;
{{NumBlk||&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;(\|A^r\|_{\alpha, \alpha})^{1/r} \ge \rho(A) &amp;lt;/math&amp;gt;  | {{EquationRef|1}}}}&lt;br /&gt;
for all positive integers ''r'', where {{math|''ρ''(''A'')}} is the [[spectral radius]] of {{mvar|A}}. For [[Symmetric matrix|symmetric]] or [[Hermitian matrix|Hermitian]] {{mvar|A}}, we have equality in ({{EquationNote|1}}) for the 2-norm, since in this case the 2-norm ''is'' precisely the spectral radius of {{mvar|A}}. For an arbitrary matrix, we may not have equality for any norm; a counterexample would be&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;A = \begin{bmatrix} 0 &amp;amp; 1 \\ 0 &amp;amp; 0 \end{bmatrix},&amp;lt;/math&amp;gt;&lt;br /&gt;
which has vanishing spectral radius. In any case, for any matrix norm, we have the [[Spectral radius#Gelfand's formula|spectral radius formula]]:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\lim_{r\to\infty}\|A^r\|^{1/r}=\rho(A). &amp;lt;/math&amp;gt;&lt;br /&gt;
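The spectral radius formula can be illustrated numerically (a sketch, not part of the original article, assuming NumPy); the r-th root of the norm of the r-th power approaches the spectral radius:

```python
import numpy as np

# Sketch (assumes NumPy): Gelfand's formula, with the r-th root of
# the Frobenius norm of A^r approaching the spectral radius rho(A).
A = np.array([[0.5, 1.0],
              [0.0, 0.4]])
rho = max(abs(np.linalg.eigvals(A)))  # 0.5 (triangular matrix)

for r in (1, 10, 100):
    approx = np.linalg.norm(np.linalg.matrix_power(A, r)) ** (1.0 / r)
    print(r, approx)  # decreases toward rho
```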
&lt;br /&gt;
==Consistent and compatible norms==&lt;br /&gt;
A matrix norm &amp;lt;math&amp;gt;\| \cdot \|&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;K^{m \times n}&amp;lt;/math&amp;gt; is called ''consistent'' with a vector norm &amp;lt;math&amp;gt;\| \cdot \|_{\alpha}&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;K^n&amp;lt;/math&amp;gt; and a vector norm &amp;lt;math&amp;gt;\| \cdot \|_{\beta}&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;K^m&amp;lt;/math&amp;gt;, if:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\left\|Ax\right\|_{\beta} \leq \left\|A\right\| \left\|x\right\|_{\alpha}&amp;lt;/math&amp;gt;&lt;br /&gt;
for all &amp;lt;math&amp;gt;A \in K^{m \times n}&amp;lt;/math&amp;gt; and all &amp;lt;math&amp;gt;x \in K^n&amp;lt;/math&amp;gt;.  In the special case of {{math|1=''m'' = ''n''}} and &amp;lt;math&amp;gt;\alpha = \beta&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\| \cdot \|&amp;lt;/math&amp;gt; is also called ''compatible'' with &amp;lt;math&amp;gt;\|\cdot \|_{\alpha}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
All induced norms are consistent by definition.  Also, any sub-multiplicative matrix norm on &amp;lt;math&amp;gt; K^{n \times n} &amp;lt;/math&amp;gt; induces a compatible vector norm on &amp;lt;math&amp;gt;K^n&amp;lt;/math&amp;gt; by defining &amp;lt;math&amp;gt; \left\| v \right\| := \left\| \left( v, v, \dots, v \right) \right\| &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==&amp;quot;Entry-wise&amp;quot; matrix norms==&lt;br /&gt;
These norms treat an &amp;lt;math&amp;gt; m \times n &amp;lt;/math&amp;gt; matrix as a vector of size &amp;lt;math&amp;gt; m \cdot n &amp;lt;/math&amp;gt;, and use one of the familiar vector norms. For example, using the ''p''-norm for vectors, {{nowrap|''p'' ≥ 1}}, we get:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\| A \|_{p,p} = \| \mathrm{vec}(A) \|_p = \left( \sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^p \right)^{1/p}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is a different norm from the induced ''p''-norm (see above) and the Schatten ''p''-norm (see below), but the notation is the same.&lt;br /&gt;
&lt;br /&gt;
The special case ''p'' = 2 is the Frobenius norm, and ''p'' = &amp;amp;infin; yields the maximum norm.&lt;br /&gt;
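A brief sketch of these entry-wise norms (not part of the original article, assuming NumPy): flattening the matrix and applying a vector norm recovers the Frobenius norm for p = 2 and the max norm for p = infinity:

```python
import numpy as np

# Sketch (assumes NumPy): entry-wise norms obtained by flattening the
# matrix; p = 2 recovers the Frobenius norm, p = inf the max norm.
A = np.array([[-3.0, 5.0, 7.0],
              [2.0, 6.0, 4.0]])

entrywise_2 = np.linalg.norm(A.ravel(), ord=2)
max_norm = np.abs(A).max()

print(np.isclose(entrywise_2, np.linalg.norm(A, 'fro')))  # True
print(max_norm)  # 7.0
```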
&lt;br /&gt;
==={{math|''L''&amp;lt;sub&amp;gt;2,1&amp;lt;/sub&amp;gt;}} and {{math|''L&amp;lt;sub&amp;gt;p,q&amp;lt;/sub&amp;gt;''}} norms===&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;(a_1, \ldots, a_n) &amp;lt;/math&amp;gt; be the columns of matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;. Viewed this way, the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; represents ''n'' data points in ''m''-dimensional space. The &amp;lt;math&amp;gt;L_{2,1}&amp;lt;/math&amp;gt; norm&amp;lt;ref&amp;gt;{{cite conference | last1=Ding | first1=Chris | last2=Zhou | first2=Ding | last3=He | first3=Xiaofeng | last4=Zha | first4=Hongyuan | title=R1-PCA: Rotational Invariant L1-norm Principal Component Analysis for Robust Subspace Factorization | book-title=Proceedings of the 23rd International Conference on Machine Learning | series=ICML '06 |date=June 2006 | isbn=1-59593-383-2 | location=Pittsburgh, Pennsylvania, USA | pages=281–288 | doi=10.1145/1143844.1143880 | publisher=ACM }}&amp;lt;/ref&amp;gt; is the sum of the Euclidean norms of the columns of the matrix:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\| A \|_{2,1} = \sum_{j=1}^n \| a_{j} \|_2 = \sum_{j=1}^n \left( \sum_{i=1}^m |a_{ij}|^2 \right)^{\frac{1}{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;math&amp;gt;L_{2,1}&amp;lt;/math&amp;gt; norm as an error function is more robust, since the error for each data point (a column) is not squared. It is used in [[robust data analysis]] and [[sparse coding]].&lt;br /&gt;
&lt;br /&gt;
For {{nowrap|''p'', ''q'' ≥ 1}}, the &amp;lt;math&amp;gt;L_{2,1}&amp;lt;/math&amp;gt; norm can be generalized to the &amp;lt;math&amp;gt;L_{p,q}&amp;lt;/math&amp;gt; norm as follows:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\| A \|_{p,q} =  \left(\sum_{j=1}^n \left( \sum_{i=1}^m |a_{ij}|^p \right)^{\frac{q}{p}}\right)^{\frac{1}{q}}.&amp;lt;/math&amp;gt;&lt;br /&gt;
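The formula above is a two-stage computation, which a short sketch makes explicit (not part of the original article, assuming NumPy): an inner p-norm down each column (index i), then an outer q-norm across the column results (index j):

```python
import numpy as np

# Sketch (assumes NumPy) of the L_{p,q} norm above: a p-norm down each
# column, then a q-norm across the per-column results.
def lpq_norm(A, p, q):
    inner = np.sum(np.abs(A) ** p, axis=0) ** (1.0 / p)  # per-column p-norms
    return np.sum(inner ** q) ** (1.0 / q)

A = np.array([[3.0, 0.0],
              [4.0, 2.0]])

print(lpq_norm(A, 2, 1))  # L_{2,1}: 5.0 + 2.0 = 7.0
print(np.isclose(lpq_norm(A, 2, 2), np.linalg.norm(A, 'fro')))  # True
```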
&lt;br /&gt;
===Frobenius norm===&lt;br /&gt;
{{Main|Hilbert–Schmidt operator}}&lt;br /&gt;
{{see also|Frobenius inner product}}&lt;br /&gt;
&lt;br /&gt;
When {{nowrap|1=''p'' = ''q'' = 2}} for the &amp;lt;math&amp;gt;L_{p,q}&amp;lt;/math&amp;gt; norm, it is called the '''Frobenius norm''' or the '''Hilbert–Schmidt norm''', though the latter term is used more frequently in the context of operators on (possibly infinite-dimensional) [[Hilbert space]]. This norm can be defined in various ways:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\|A\|_\text{F} = \sqrt{\sum_{i=1}^m\sum_{j=1}^n |a_{ij}|^2} = \sqrt{\operatorname{trace}\left(A^* A\right)} = \sqrt{\sum_{i=1}^{\min\{m, n\}} \sigma_i^2(A)},&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\sigma_i(A)&amp;lt;/math&amp;gt; are the [[singular value]]s of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;. Recall that the [[trace (matrix)|trace function]] returns the sum of diagonal entries of a square matrix.&lt;br /&gt;
&lt;br /&gt;
The Frobenius norm is an extension of the Euclidean norm to &amp;lt;math&amp;gt;K^{m \times n}&amp;lt;/math&amp;gt; and comes from the [[Frobenius inner product]] on the space of all matrices.&lt;br /&gt;
&lt;br /&gt;
The Frobenius norm is sub-multiplicative and is very useful for [[numerical linear algebra]]. The sub-multiplicativity of the Frobenius norm can be proved using the [[Cauchy–Schwarz inequality]].&lt;br /&gt;
&lt;br /&gt;
The Frobenius norm is often easier to compute than induced norms, and has the useful property of being invariant under [[rotation matrix|rotations]] (and [[Unitary operator|unitary]] operations in general). That is, &amp;lt;math&amp;gt;\|A\|_\text{F} = \|AU\|_\text{F} = \|UA\|_\text{F}&amp;lt;/math&amp;gt; for any unitary matrix &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;. This property follows from the cyclic nature of the trace (&amp;lt;math&amp;gt;\operatorname{trace}(XYZ) = \operatorname{trace}(ZXY)&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\|AU\|_\text{F}^2 = \operatorname{trace}\left( (AU)^{*}A U \right)&lt;br /&gt;
  = \operatorname{trace}\left( U^{*} A^{*}A U \right)&lt;br /&gt;
  = \operatorname{trace}\left( UU^{*} A^{*}A \right)&lt;br /&gt;
  = \operatorname{trace}\left( A^{*} A \right)&lt;br /&gt;
  = \|A\|_\text{F}^2,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and analogously:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\|UA\|_\text{F}^2 = \operatorname{trace}\left( (UA)^{*}UA \right)&lt;br /&gt;
  = \operatorname{trace}\left( A^{*} U^{*} UA  \right)&lt;br /&gt;
  = \operatorname{trace}\left( A^{*}A \right)&lt;br /&gt;
  = \|A\|_\text{F}^2,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where we have used the unitary nature of &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; (that is, &amp;lt;math&amp;gt;U^* U = U U^* = \mathbf{I}&amp;lt;/math&amp;gt;).&lt;br /&gt;
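This unitary invariance can be spot-checked numerically (a sketch, not part of the original article, assuming NumPy), using the orthogonal factor of a QR factorization as a random orthogonal matrix:

```python
import numpy as np

# Sketch (assumes NumPy): unitary invariance of the Frobenius norm,
# with a random orthogonal U from a QR factorization.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # orthogonal factor

f = np.linalg.norm(A, 'fro')
print(np.isclose(np.linalg.norm(U @ A, 'fro'), f))  # True
print(np.isclose(np.linalg.norm(A @ U, 'fro'), f))  # True
```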
&lt;br /&gt;
It also satisfies&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\|A^* A\|_\text{F} = \|AA^*\|_\text{F} \leq \|A\|_\text{F}^2&amp;lt;/math&amp;gt;&lt;br /&gt;
and &lt;br /&gt;
:&amp;lt;math&amp;gt;\|A + B\|_\text{F}^2 = \|A\|_\text{F}^2 + \|B\|_\text{F}^2 + 2 \operatorname{Re} \left( \langle A, B \rangle_\text{F} \right),&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\langle A, B \rangle_\text{F}&amp;lt;/math&amp;gt; is the [[Frobenius inner product]], and Re is the real part of a complex number (irrelevant for real matrices).&lt;br /&gt;
&lt;br /&gt;
===Max norm===&lt;br /&gt;
&lt;br /&gt;
The '''max norm''' is the elementwise norm in the limit as {{nowrap|1=''p'' = ''q''}} goes to infinity:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; \|A\|_{\max} = \max_{ij} |a_{ij}|. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This norm is not [[Matrix norm#Definition|sub-multiplicative]].&lt;br /&gt;
&lt;br /&gt;
Note that in some literature (such as [[Communication complexity]]), an alternative definition of max-norm, also called the &amp;lt;math&amp;gt;\gamma_2&amp;lt;/math&amp;gt;-norm, refers to the factorization norm:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; \gamma_2(A) = \min_{U,V: A = UV^T} \| U \|_{2,\infty} \| V \|_{2,\infty} =  \min_{U,V: A = UV^T} \max_{i,j} \| U_{i,:} \|_2 \| V_{j,:} \|_2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Schatten norms==&lt;br /&gt;
{{further|Schatten norm}}&lt;br /&gt;
&lt;br /&gt;
The Schatten ''p''-norms arise when applying the ''p''-norm to the vector of [[singular value decomposition|singular values]] of a matrix.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt; If the singular values of the &amp;lt;math&amp;gt;m \times n&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; are denoted by ''&amp;amp;sigma;&amp;lt;sub&amp;gt;i&amp;lt;/sub&amp;gt;'', then the Schatten ''p''-norm is defined by&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt; \|A\|_p = \left( \sum_{i=1}^{\min\{m,n\}} \sigma_{i}^p(A) \right)^{\frac{1}{p}}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These norms again share the notation with the induced and entry-wise ''p''-norms, but they are different.&lt;br /&gt;
&lt;br /&gt;
All Schatten norms are sub-multiplicative. They are also unitarily invariant, which means that &amp;lt;math&amp;gt;\|A\| = \|UAV\|&amp;lt;/math&amp;gt; for all matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and all [[unitary matrix|unitary matrices]] &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most familiar cases are ''p'' = 1, 2, &amp;amp;infin;. The case ''p'' = 2 yields the Frobenius norm, introduced before. The case ''p'' = &amp;amp;infin; yields the spectral norm, which is the operator norm induced by the vector 2-norm (see above). Finally, ''p'' = 1 yields the '''nuclear norm''' (also known as the ''trace norm'', or the [[Singular Value Decomposition#Ky Fan norms|Ky Fan]] 'n'-norm&amp;lt;ref&amp;gt;{{Cite journal|last=Fan|first=Ky.|date=1951|title=Maximum properties and inequalities for the eigenvalues of completely continuous operators|journal=Proceedings of the National Academy of Sciences of the United States of America| volume=37|issue=11|pages=760–766|doi=10.1073/pnas.37.11.760|pmc=1063464|pmid=16578416|bibcode=1951PNAS...37..760F|doi-access=free}}&amp;lt;/ref&amp;gt;), defined as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\|A\|_{*} = \operatorname{trace} \left(\sqrt{A^*A}\right) = \sum_{i=1}^{\min\{m,n\}} \sigma_{i}(A),&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\sqrt{A^*A}&amp;lt;/math&amp;gt; denotes the positive semidefinite matrix &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;BB=A^*A&amp;lt;/math&amp;gt;. More precisely, since &amp;lt;math&amp;gt;A^*A&amp;lt;/math&amp;gt; is a [[positive semidefinite matrix]], its [[square root of a matrix|square root]] is well-defined. The nuclear norm &amp;lt;math&amp;gt;\|A\|_{*}&amp;lt;/math&amp;gt; is the [[convex envelope]] of the rank function &amp;lt;math&amp;gt;\text{rank}(A)&amp;lt;/math&amp;gt; on the unit ball of the spectral norm, so it is often used in [[mathematical optimization]] to search for low-rank matrices.&lt;br /&gt;
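The Schatten p-norms, including the nuclear norm, can be sketched directly from the singular values (not part of the original article, assuming NumPy):

```python
import numpy as np

# Sketch (assumes NumPy): Schatten p-norms as the vector p-norm of the
# singular values; p = 1 is the nuclear norm, p = 2 the Frobenius norm.
def schatten_norm(A, p):
    sigma = np.linalg.svd(A, compute_uv=False)
    return np.sum(sigma ** p) ** (1.0 / p)

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])

print(schatten_norm(A, 1))  # 3.0, matches np.linalg.norm(A, 'nuc')
print(np.isclose(schatten_norm(A, 2), np.linalg.norm(A, 'fro')))  # True
```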
&lt;br /&gt;
Combining [[von Neumann's trace inequality]] with [[Hölder's inequality]] for Euclidean space yields a version of Hölder's inequality for Schatten norms, for &amp;lt;math&amp;gt; 1/p + 1/q = 1 &amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; |\operatorname{trace}(A^*B)| \le \|A\|_p \|B\|_q. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In particular, this implies the Schatten norm inequality&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt; \|A\|_F^2 \le \|A\|_p \|A\|_q. &amp;lt;/math&amp;gt;&lt;br /&gt;
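A randomized spot-check of the trace inequality in the case p = q = 2, where the right-hand side is the product of Frobenius norms (a sketch, not part of the original article, assuming NumPy and real matrices):

```python
import numpy as np

# Sketch (assumes NumPy, real matrices): the Schatten-norm Holder
# inequality with p = q = 2, i.e. |trace(A^T B)| bounded by the
# product of the Frobenius norms (Cauchy-Schwarz for matrices).
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = abs(np.trace(A.T @ B))
rhs = np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')
print(lhs, rhs)  # lhs never exceeds rhs
```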
&lt;br /&gt;
==Monotone norms==&lt;br /&gt;
A matrix norm &amp;lt;math&amp;gt;\|\cdot \|&amp;lt;/math&amp;gt; is called ''monotone'' if it is monotonic with respect to the [[Loewner order]]. Thus, a matrix norm is monotone if&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;A \preccurlyeq B \Rightarrow \|A\| \leq \|B\|.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Frobenius norm and spectral norm are examples of monotone norms.&amp;lt;ref&amp;gt;{{cite book |last1=Ciarlet |first1=Philippe G. |title=Introduction to numerical linear algebra and optimisation |date=1989 |publisher=Cambridge University Press |location=Cambridge, England |isbn=0521327881 |page=57}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Cut norms ==&lt;br /&gt;
Another source of inspiration for matrix norms arises from considering a matrix as the [[adjacency matrix]] of a [[Weighted graph|weighted]], [[directed graph]].&amp;lt;ref name=&amp;quot;FK&amp;quot;&amp;gt;{{Cite journal|last1=Frieze| first1=Alan| last2=Kannan|first2=Ravi| date=1999-02-01|title=Quick Approximation to Matrices and Applications| url=https://doi.org/10.1007/s004930050052| journal=Combinatorica|language=en| volume=19 |issue=2 |pages=175–220 |doi=10.1007/s004930050052 |s2cid=15231198 |issn=1439-6912}}&amp;lt;/ref&amp;gt;  The so-called &amp;quot;cut norm&amp;quot; measures how close the associated graph is to being [[Bipartite graph|bipartite]]:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\|A\|_{\Box}=\max_{S\subseteq[n], T\subseteq[m]}{\left|\sum_{s\in S,t\in T}{A_{t,s}}\right|}&amp;lt;/math&amp;gt; &lt;br /&gt;
where {{math|''A'' &amp;amp;isin; ''K''&amp;lt;sup&amp;gt;''m''×''n''&amp;lt;/sup&amp;gt;}}.&amp;lt;ref name=&amp;quot;FK&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;LNGL&amp;quot;&amp;gt;{{Cite book| last=Lovász László|title=Large Networks and Graph Limits |publisher=American Mathematical Society|year=2012| isbn=978-0-8218-9085-1 | series=AMS Colloquium Publications|volume=60| location=Providence, RI|pages=127–131 |chapter=The cut distance|author-link=László Lovász}}  Note that Lovász rescales {{math|‖''A''‖&amp;lt;sub&amp;gt;□&amp;lt;/sub&amp;gt;}} to lie in {{closed-closed|0, 1}}.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;AN&amp;quot;&amp;gt;{{Cite journal|last1=Alon |first1=Noga |author-link=Noga Alon| last2=Naor| first2=Assaf| date=2004-06-13| title=Approximating the cut-norm via Grothendieck's inequality | url=https://doi.org/10.1145/1007352.1007371 | journal=Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing | series=STOC '04 |location=Chicago, IL, USA | publisher=Association for Computing Machinery| pages=72–80| doi=10.1145/1007352.1007371 | isbn=978-1-58113-852-8 |s2cid=1667427}}&amp;lt;/ref&amp;gt;  Equivalent definitions (up to a constant factor) impose the conditions {{math|2{{abs|''S''}} &amp;gt; ''n'' &amp;amp;amp; 2{{abs|''T''}} &amp;gt; ''m''}}; {{math|1=''S'' = ''T''}}; or {{math|1=''S'' &amp;amp;cap; ''T'' = &amp;amp;emptyset;}}.&amp;lt;ref name=&amp;quot;LNGL&amp;quot; /&amp;gt;&lt;br /&gt;
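The definition above can be realized by exhaustive search (an illustrative brute-force sketch, not part of the original article, assuming NumPy; the cost is exponential, so this is only feasible for small matrices):

```python
import numpy as np
from itertools import combinations

# Brute-force sketch (assumes NumPy) of the cut norm: maximize the
# absolute sum over all row-subset by column-subset blocks.
def cut_norm(A):
    m, n = A.shape
    best = 0.0
    for r in range(m + 1):
        for rows in combinations(range(m), r):
            for c in range(n + 1):
                for cols in combinations(range(n), c):
                    total = abs(sum(A[i, j] for i in rows for j in cols))
                    best = max(best, total)
    return best

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
print(cut_norm(A))  # 1.0
```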
&lt;br /&gt;
The cut-norm is equivalent to the induced operator norm {{math|‖·‖&amp;lt;sub&amp;gt;&amp;amp;infin;→1&amp;lt;/sub&amp;gt;}}, which is itself equivalent to another norm, called the [[Grothendieck inequality|Grothendieck]] norm.&amp;lt;ref name=&amp;quot;AN&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To define the Grothendieck norm, first note that a linear operator {{Math|''K''&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt; → ''K''&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;}} is just a scalar, and thus extends to a linear operator on any {{Math|''K&amp;lt;sup&amp;gt;k&amp;lt;/sup&amp;gt;'' → ''K&amp;lt;sup&amp;gt;k&amp;lt;/sup&amp;gt;''}}.  Moreover, given any choice of basis for {{Math|''K&amp;lt;sup&amp;gt;n&amp;lt;/sup&amp;gt;''}} and {{Math|''K&amp;lt;sup&amp;gt;m&amp;lt;/sup&amp;gt;''}}, any linear operator {{Math|''K&amp;lt;sup&amp;gt;n&amp;lt;/sup&amp;gt;'' → ''K&amp;lt;sup&amp;gt;m&amp;lt;/sup&amp;gt;''}} extends to a linear operator {{Math|(''K''&amp;lt;sup&amp;gt;''k''&amp;lt;/sup&amp;gt;)&amp;lt;sup&amp;gt;''n''&amp;lt;/sup&amp;gt; → (''K''&amp;lt;sup&amp;gt;''k''&amp;lt;/sup&amp;gt;)&amp;lt;sup&amp;gt;''m''&amp;lt;/sup&amp;gt;}}, by letting each matrix element act on elements of {{Math|''K&amp;lt;sup&amp;gt;k&amp;lt;/sup&amp;gt;''}} via scalar multiplication.  The Grothendieck norm is the norm of that extended operator; in symbols:&amp;lt;ref name=&amp;quot;AN&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\|A\|_{G,k}=\sup_{\text{each } u_j, v_l\in K^k; \|u_j\| = \|v_l\| = 1}{\sum_{j \in [n], l \in [m]}{(u_j\cdot v_l)A_{l,j}}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Grothendieck norm depends on choice of basis (usually taken to be the [[standard basis]]) and {{mvar|k}}.&lt;br /&gt;
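For {{math|1=''k'' = 1}} each unit vector {{math|''u<sub>j</sub>''}}, {{math|''v<sub>l</sub>''}} is just a sign ±1, so the supremum becomes a maximum over sign patterns; this recovers the {{math|‖·‖<sub>∞→1</sub>}} norm mentioned above. A brute-force sketch (function name `grothendieck_k1` is hypothetical; exponential cost, small matrices only):

```python
# ||A||_{G,1}: maximise sum_{j,l} u_j * v_l * A[l][j] over signs u_j, v_l in {-1, +1}.
# Equivalently max over sign vectors of v^T (A u), i.e. the infinity-to-1 operator norm.
from itertools import product

def grothendieck_k1(A):
    m, n = len(A), len(A[0])
    best = float('-inf')
    for u in product((-1, 1), repeat=n):        # one sign per column index j
        for v in product((-1, 1), repeat=m):    # one sign per row index l
            val = sum(u[j] * v[l] * A[l][j] for j in range(n) for l in range(m))
            best = max(best, val)
    return best

print(grothendieck_k1([[1, -2], [3, 4]]))   # 8, e.g. at u = (1, 1), v = (-1, 1)
```

For general {{mvar|k}} no such finite enumeration exists, which is why the approximation results of Alon and Naor via Grothendieck's inequality are needed.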
&lt;br /&gt;
==Equivalence of norms==&lt;br /&gt;
{{See also|Equivalent norms}}&lt;br /&gt;
&lt;br /&gt;
For any two matrix norms &amp;lt;math&amp;gt;\|\cdot\|_{\alpha}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\|\cdot\|_{\beta}&amp;lt;/math&amp;gt;, we have that:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;r\|A\|_\alpha\leq\|A\|_\beta\leq s\|A\|_\alpha&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for some positive numbers ''r'' and ''s'', for all matrices &amp;lt;math&amp;gt;A\in K^{m \times n}&amp;lt;/math&amp;gt;. In other words, all norms on &amp;lt;math&amp;gt;K^{m \times n}&amp;lt;/math&amp;gt; are ''equivalent''; they induce the same [[topology (structure)|topology]] on &amp;lt;math&amp;gt;K^{m \times n}&amp;lt;/math&amp;gt;. This holds because the vector space &amp;lt;math&amp;gt;K^{m \times n}&amp;lt;/math&amp;gt; has finite [[dimension (mathematics)|dimension]] &amp;lt;math&amp;gt;m \cdot n&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Moreover, for every vector norm &amp;lt;math&amp;gt;\|\cdot\|&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;\R^{n\times n}&amp;lt;/math&amp;gt;, there exists a unique positive real number &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;\ell\|\cdot\|&amp;lt;/math&amp;gt; is a sub-multiplicative matrix norm for every &amp;lt;math&amp;gt;\ell \ge k&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A sub-multiplicative matrix norm &amp;lt;math&amp;gt;\|\cdot\|_{\alpha}&amp;lt;/math&amp;gt; is said to be ''minimal'' if there exists no other sub-multiplicative matrix norm &amp;lt;math&amp;gt;\|\cdot\|_{\beta}&amp;lt;/math&amp;gt; satisfying &amp;lt;math&amp;gt;\|\cdot\|_{\beta} &amp;lt; \|\cdot\|_{\alpha}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Examples of norm equivalence===&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\|A\|_p&amp;lt;/math&amp;gt; once again refer to the norm induced by the vector ''p''-norm (as above in the Induced Norm section).&lt;br /&gt;
&lt;br /&gt;
For matrix &amp;lt;math&amp;gt;A\in\R^{m\times n}&amp;lt;/math&amp;gt; of [[Rank (linear algebra)|rank]] &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt;, the following inequalities hold:&amp;lt;ref&amp;gt;&lt;br /&gt;
[[Gene Golub|Golub, Gene]]; [[Charles Van Loan|Charles F. Van Loan]] (1996). Matrix Computations – Third Edition. Baltimore: The Johns Hopkins University Press, 56–57. {{ISBN|0-8018-5413-X}}.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Roger Horn and Charles Johnson. ''Matrix Analysis,'' Chapter 5, Cambridge University Press, 1985. {{ISBN|0-521-38632-2}}.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;math&amp;gt;\|A\|_2\le\|A\|_F\le\sqrt{r}\|A\|_2&amp;lt;/math&amp;gt;&lt;br /&gt;
*&amp;lt;math&amp;gt;\|A\|_F \le \|A\|_{*} \le \sqrt{r} \|A\|_F&amp;lt;/math&amp;gt;&lt;br /&gt;
*&amp;lt;math&amp;gt;\|A\|_{\max} \le \|A\|_2 \le \sqrt{mn}\|A\|_{\max}&amp;lt;/math&amp;gt;&lt;br /&gt;
*&amp;lt;math&amp;gt;\frac{1}{\sqrt{n}}\|A\|_\infty\le\|A\|_2\le\sqrt{m}\|A\|_\infty&amp;lt;/math&amp;gt;&lt;br /&gt;
*&amp;lt;math&amp;gt;\frac{1}{\sqrt{m}}\|A\|_1\le\|A\|_2\le\sqrt{n}\|A\|_1.&amp;lt;/math&amp;gt;&lt;br /&gt;
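The five inequalities above can be spot-checked numerically with NumPy on a random matrix; this is only an illustrative sketch under the stated rank and shape assumptions, not a proof.

```python
# Spot-check the five norm-equivalence inequalities on a random 5x3 matrix.
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))
r = np.linalg.matrix_rank(A)

two   = np.linalg.norm(A, 2)         # spectral norm ||A||_2
fro   = np.linalg.norm(A, 'fro')     # Frobenius norm ||A||_F
nuc   = np.linalg.norm(A, 'nuc')     # nuclear norm ||A||_*
mx    = np.max(np.abs(A))            # max norm ||A||_max
inf_n = np.linalg.norm(A, np.inf)    # induced infinity-norm (max row sum)
one_n = np.linalg.norm(A, 1)         # induced 1-norm (max column sum)

eps = 1e-12                          # tolerance for floating-point roundoff
checks = [                           # each pair (lo, hi) checks that lo is at most hi
    (two, fro), (fro, np.sqrt(r) * two),
    (fro, nuc), (nuc, np.sqrt(r) * fro),
    (mx, two), (two, np.sqrt(m * n) * mx),
    (inf_n / np.sqrt(n), two), (two, np.sqrt(m) * inf_n),
    (one_n / np.sqrt(m), two), (two, np.sqrt(n) * one_n),
]
assert all(hi + eps >= lo for lo, hi in checks)
print("all five norm-equivalence inequalities hold")
```

Note that `np.linalg.norm` with `ord=2` returns the induced spectral norm for a 2-D array, while `ord=np.inf` and `ord=1` return the maximum row sum and maximum column sum respectively.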
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Dual norm]]&lt;br /&gt;
* [[Logarithmic norm]]&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
{{reflist|group=Note}}&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
==Bibliography==&lt;br /&gt;
* [[James W. Demmel]], Applied Numerical Linear Algebra, section 1.7, published by SIAM, 1997.&lt;br /&gt;
* Carl D. Meyer, Matrix Analysis and Applied Linear Algebra, published by SIAM, 2000. [http://www.matrixanalysis.com]&lt;br /&gt;
* [[John Watrous (computer scientist)|John Watrous]], Theory of Quantum Information, [https://web.archive.org/web/20160304053759/https://cs.uwaterloo.ca/~watrous/CS766/LectureNotes/02.pdf 2.3 Norms of operators], lecture notes, University of Waterloo, 2011.&lt;br /&gt;
* [[Kendall Atkinson]], An Introduction to Numerical Analysis, published by John Wiley &amp;amp; Sons, Inc 1989&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Categories--&amp;gt;&lt;br /&gt;
[[Category:Norms (mathematics)]]&lt;br /&gt;
[[Category:Linear algebra]]&lt;/div&gt;</summary>
		<author><name>wikipedia&gt;Jags1111</name></author>
	</entry>
</feed>