A spectral theorem for matrices over fields of power series

Let $K = \mathbb{R}((t_1, \ldots, t_m))$ be a field of formal power series in one or several variables with real coefficients. We prove that every symmetric square matrix $A \in \mathrm{Mat}_n(K)$ can be diagonalized by means of an orthogonal matrix $U \in \mathrm{Mat}_n(K)$. Our proof is based on a recursive construction and prepares the way for effectively computing the transition matrix $U$ (and therefore the eigenvalues of $A$ and their multiplicities). The result carries over to certain Henselian fields of power series in infinitely many variables.

1991 Mathematics subject classification: 15A99.

Introduction. The most prominent result in the theory of real or complex matrices is the Spectral Theorem, which says that every symmetric [resp. hermitian] square matrix can be put into diagonal form by means of an orthogonal [resp. a unitary] matrix. The far-reaching applications of this result and its generalization to bounded linear operators on infinite-dimensional Hilbert spaces have been intensively studied. However, little is known about analogous decompositions of matrices with entries in more general fields. Diarra [2] showed that symmetric matrices over fields of p-adic numbers cannot be diagonalized in general. In turn, Adkins [1] proved a theorem on diagonalization of matrices with entries in discrete hermitian rings.
In the present paper we consider fields $K = \mathbb{R}((t_1, \ldots, t_m))$ of formal power series in one or several variables with real coefficients. Our main result states that over these fields $K$ every symmetric matrix can be orthogonally diagonalized. We shall show that this result even carries over to fields of generalized power series in infinitely many variables. In the classical case of a real symmetric matrix $A$ the diagonalization is obtained by computing the characteristic polynomial of $A$ and using the fact that the quadratic extension $\mathbb{C} = \mathbb{R}(\sqrt{-1})$ is algebraically closed. In the present case this way of reasoning fails, for our fields $K = \mathbb{R}((t_1, \ldots, t_m))$ are too far from being algebraically closed; in fact these fields admit finite extensions of any degree. Our method of proof combines two ideas and can be outlined as follows. First, write $K = \mathbb{R}((t_1, \ldots, t_m)) = K_0((t))$ where $t = t_m$ and $K_0 = \mathbb{R}((t_1, \ldots, t_{m-1}))$. The field $K = K_0((t))$ is complete with respect to a non-archimedian, discrete valuation. This allows us to represent a given symmetric matrix $\mathcal{A}$ with entries in $K$ as a convergent power series $\mathcal{A} = A_0 + A_1 t + A_2 t^2 + \cdots$ with coefficients $A_k$ in a smaller matrix ring. Secondly, we shall set up a recursive construction that produces an orthogonal transition matrix $\mathcal{U} = U_0 + U_1 t + U_2 t^2 + \cdots$ such that $\mathcal{U}^* \mathcal{A}\, \mathcal{U}$ is decomposed into two blocks of smaller size. The proof is then finished by an easy induction.
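As a concrete illustration of this series representation (our sketch, not part of the paper), a matrix over $K_0((t))$ can be stored, up to a chosen truncation order, as a list of coefficient matrices; products are then Cauchy products of the coefficient lists. The helper names and the use of numpy with $K_0 = \mathbb{R}$ are our own assumptions.

```python
import numpy as np

N = 8  # truncation order; the construction in the paper is exact, order by order

def series_mul(A, B):
    """Cauchy product of two coefficient lists (A = sum A_i t^i), truncated at N."""
    C = [np.zeros_like(A[0]) for _ in range(N + 1)]
    for i, Ai in enumerate(A):
        for j, Bj in enumerate(B):
            if i + j <= N:
                C[i + j] += Ai @ Bj
    return C

def series_transpose(A):
    """Transposition acts coefficient-wise: (sum A_k t^k)^* = sum A_k^* t^k."""
    return [Ak.T for Ak in A]
```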
It is a remarkable feature of this proof that it does not involve the spectrum. Indeed, the eigenvalues of A are obtained at the end as a by-product. Thus the proof is potentially a tool to study arithmetical properties of fields of power series.
We should like to mention that the paper has grown out of studies in the theory of orthomodular spaces. These are, by definition, vector spaces $E$ endowed with a hermitean form $\Phi$ such that the Projection Theorem holds for $(E, \Phi)$: every orthogonally closed linear subspace $U \subseteq E$ is a direct summand of the whole space. Classical examples are the Hilbert spaces over $\mathbb{R}$ or $\mathbb{C}$, and for a long time there were no others. Then, in 1980, numerous non-classical, infinite-dimensional orthomodular spaces were discovered. They are constructed over certain non-archimedian, complete fields; the valuations in question are of infinite rank. These new spaces carry a natural non-archimedian norm, so there is a notion of "bounded linear operator". The central question is whether a bounded, self-adjoint linear operator $T : E \to E$ always admits an orthogonal decomposition derived from its spectrum. By using the technique of reduction modulo residual spaces the task of decomposing an infinite-dimensional operator $T : E \to E$ is seen to be closely related to the problem of decomposing finite matrices over fields of power series (or of rational functions). For details we refer to [3] and [4].

1. Fields of power series.
Given any field $K_0$ with $\operatorname{char}(K_0) \neq 2$ we let $K = K_0((t))$ be the field of formal power series in the indeterminate $t$ with coefficients in $K_0$, and we let $\varphi : K \to \mathbb{Z} \cup \{\infty\}$ be the usual exponential valuation. Thus for a typical $a = \sum_i a_i t^i$ in $K$ we have $\varphi(a) = \min\{i \in \mathbb{Z} : a_i \neq 0\}$ if $a \neq 0$, and $\varphi(a) = \infty$ if $a = 0$. The valued field $(K, \varphi)$ is henselian (cf. [4]). We denote by $R = \{a \in K : \varphi(a) \geq 0\}$ the valuation ring and by $\pi : R \to K_0$ the residue epimorphism which maps $a = \sum_{i \geq 0} a_i t^i$ to $a_0$; it fixes $K_0$ elementwise.

Lemma 1: If $\vartheta \in K$ is algebraic over $K_0$ then $\vartheta \in K_0$.

Proof: Let $p(X) = a_0 + a_1 X + \cdots + a_m X^m \in K_0[X]$ be the minimal polynomial of $\vartheta$ over $K_0$, so that $a_0 + a_1 \vartheta + a_2 \vartheta^2 + \cdots + a_m \vartheta^m = 0$. There are at least two indices $0 \leq i < j \leq m$ such that $\varphi(a_i \vartheta^i) = \varphi(a_j \vartheta^j)$, for otherwise the terms on the left-hand side couldn't cancel. Since $\varphi(a_i) = \varphi(a_j) = 0$ it follows that $\varphi(\vartheta^i) = \varphi(\vartheta^j)$, hence $\varphi(\vartheta) = 0$. Thus $\vartheta \in R$. Applying now the epimorphism $\pi : R \to K_0$ to the above equality and noticing that $\pi(a_i) = a_i$ for all $i$ we obtain $a_0 + a_1 \pi(\vartheta) + a_2 \pi(\vartheta)^2 + \cdots + a_m \pi(\vartheta)^m = 0$, i.e. $\pi(\vartheta) \in K_0$ is a root of the polynomial $p(X)$. Since $p(X)$ is irreducible this is possible only when $m = 1$. We conclude that $\vartheta \in K_0$, as claimed.
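On the truncated representation sketched above, the exponential valuation is immediate to compute; the following small helper (ours) mirrors the definition of $\varphi$.

```python
import math

def val(coeffs):
    """phi(a) = min{ i : a_i != 0 } for a = sum a_i t^i; infinity for a = 0."""
    for i, c in enumerate(coeffs):
        if c != 0:
            return i
    return math.inf

# Example: val([0, 0, 2, -1]) == 2, matching phi(2t^2 - t^3) = 2.
```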
2. Matrices over fields of power series.

We consider the ring $\mathrm{Mat}_n(K)$ of all square matrices of size $n \times n$ with entries in $K$ along with the subring $\mathrm{Mat}_n(K_0)$ consisting of all matrices with entries in the subfield $K_0 \subset K$. We shall denote the matrices in $\mathrm{Mat}_n(K)$ by $\mathcal{A}, \mathcal{B}, \ldots, \mathcal{U}, \ldots$ and those in $\mathrm{Mat}_n(K_0)$ by $A, B, \ldots, U, \ldots$. The unit matrix is always denoted by $I$. A matrix $\mathcal{A} \in \mathrm{Mat}_n(K)$ is called orthogonal if its transpose $\mathcal{A}^*$ is equal to the inverse $\mathcal{A}^{-1}$, i.e. if $\mathcal{A}^* \mathcal{A} = \mathcal{A} \mathcal{A}^* = I$. We say that $\mathcal{A}$ is diagonal if all entries outside the main diagonal are 0; more generally, we say that $\mathcal{A}$ is $(r, n-r)$-blockdiagonal if it has the shape
$$\mathcal{A} = \begin{pmatrix} \mathcal{B} & 0 \\ 0 & \mathcal{C} \end{pmatrix}$$
where $\mathcal{B}$ and $\mathcal{C}$ are square matrices of size $r \times r$ and $(n-r) \times (n-r)$ respectively. Our computations later on will rely on a representation of the elements $\mathcal{A} \in \mathrm{Mat}_n(K)$ as a formal power series $\mathcal{A} = A_0 + A_1 t + A_2 t^2 + \cdots$ with coefficients $A_k$ in the (non-commutative) subring $\mathrm{Mat}_n(K_0)$.

Theorem 1: Let $K = K_0((t))$ and $n \geq 1$. The following conditions are equivalent:
(a) Every symmetric matrix $A \in \mathrm{Mat}_n(K_0)$ can be diagonalized by means of an orthogonal matrix $U \in \mathrm{Mat}_n(K_0)$.
(b) Every symmetric matrix $\mathcal{A} \in \mathrm{Mat}_n(K)$ can be diagonalized by means of an orthogonal matrix $\mathcal{U} \in \mathrm{Mat}_n(K)$.

The proof will be divided into several steps. We begin with the easy part.

3. Proof of the implication (b) ⇒ (a).

Let $A \in \mathrm{Mat}_n(K_0)$ be symmetric. By condition (b) there exists an orthogonal matrix $\mathcal{U} \in \mathrm{Mat}_n(K)$ such that
$$\mathcal{U}^* A\, \mathcal{U} = D = \begin{pmatrix} \lambda_{11} & & \\ & \ddots & \\ & & \lambda_{nn} \end{pmatrix}.$$
Here the diagonal entries $\lambda_{ii}$ are the eigenvalues of the matrix $A$, that is, the roots of the characteristic polynomial $p_A(X) = \det(X \cdot I - A)$. Since the coefficients of $p_A(X)$ belong to $K_0$, the $\lambda_{ii}$ are algebraic over $K_0$. By Lemma 1 we conclude that $\lambda_{11}, \ldots, \lambda_{nn} \in K_0$. Consider an eigenvalue $\lambda_{ii}$ and let $m_i$ be its algebraic multiplicity. Then $\lambda_{ii}$ is repeated $m_i$ times in $D$, and consequently $A - \lambda_{ii} \cdot I$ has rank $n - m_i$ over $K$. But the matrix $A - \lambda_{ii} \cdot I$ is in $\mathrm{Mat}_n(K_0)$, so its rank over $K_0$ is the same as its rank over $K$, as can easily be seen by applying the Gaussian algorithm. Consequently $\lambda_{ii}$ has geometric multiplicity $m_i$. Thus for each eigenvalue of $A$ the algebraic and the geometric multiplicity coincide. This entails that there exists an orthogonal matrix $U$ in $\mathrm{Mat}_n(K_0)$ such that $U^* A U$ is diagonal, as asserted.
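The rank argument can be checked on a toy example (ours, assuming the sympy library is available): Gaussian elimination uses only field operations in $K_0$, so the rank computed over $\mathbb{Q}$ below is also the rank over any extension field such as $\mathbb{Q}((t))$.

```python
import sympy as sp

# Symmetric matrix over Q with eigenvalues 1, 3, 3 lying in the base field.
A = sp.Matrix([[2, 1, 0],
               [1, 2, 0],
               [0, 0, 3]])
lam = 3                              # eigenvalue of algebraic multiplicity m_i = 2
print((A - lam * sp.eye(3)).rank())  # prints 1 = n - m_i: geometric multiplicity 2
```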
The substantial part is the converse implication to which we now turn.
4. Proof of the implication (a) ⇒ (b).

In this section we assume throughout that $K_0$ is a coefficient field satisfying condition (a). Let there be given a symmetric matrix $\mathcal{A} \neq 0$ in $\mathrm{Mat}_n(K)$.

4.1. In a preliminary step we normalize the initial coefficient. If every coefficient $A_k$ of $\mathcal{A} = \sum_k A_k t^k$ is a multiple of $I$ then $\mathcal{A}$ is already diagonal and there is nothing to prove. Otherwise let $m$ be the least index such that $A_m$ is not a multiple of $I$, say $A_k = \lambda_k \cdot I$ for $k < m$, and put $\mathcal{B} = t^{-m}\bigl(\mathcal{A} - \sum_{k<m} \lambda_k t^k \cdot I\bigr)$. Then $\mathcal{B}$ is symmetric with initial coefficient $A_m$, and every orthogonal matrix which diagonalizes $\mathcal{B}$ also diagonalizes $\mathcal{A}$. By hypothesis (a) there is an orthogonal matrix $V \in \mathrm{Mat}_n(K_0)$ such that $V^* A_m V$ is diagonal; put $\mathcal{C} = V^* \mathcal{B} V$. The expansion of $\mathcal{C}$ starts with a coefficient matrix $C_0 = V^* A_m V$ that is diagonal but not a multiple of $I$. Moreover, if we succeed in finding an orthogonal matrix $\mathcal{U} \in \mathrm{Mat}_n(K)$ which diagonalizes $\mathcal{C}$ then $V \cdot \mathcal{U}$ will provide a diagonalization of $\mathcal{B}$ and therefore also of $\mathcal{A}$. Hence we may assume from the start that the initial coefficient of $\mathcal{A}$ satisfies

(1) $A_0$ is diagonal but not a multiple of $I$.
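In code, the preliminary step 4.1 might look as follows (a sketch under the truncated representation above, with $K_0 = \mathbb{R}$ so that np.linalg.eigh can stand in for hypothesis (a); the name normalize is ours).

```python
import numpy as np

def normalize(A, tol=1e-12):
    """A[k] is the coefficient of t^k. Assuming A is not a power-series
    multiple of I, return (C, V) as in 4.1: C = V^T B V with C[0] diagonal
    and not a multiple of I."""
    n = A[0].shape[0]
    def is_scalar(M):                # is M a multiple of the unit matrix?
        return np.allclose(M, M[0, 0] * np.eye(n), atol=tol)
    m = next(k for k, Ak in enumerate(A) if not is_scalar(Ak))
    B = A[m:]                        # coefficients below m were multiples of I
    _, V = np.linalg.eigh(B[0])      # hypothesis (a) for K0 = R
    return [V.T @ Bk @ V for Bk in B], V
```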
We should like to point out that it is only in the above preliminary step that the hypothesis (a) is actually needed. However, the condition (a) can hardly be replaced by an assumption on the initial matrix $A_0$, because it will be used repeatedly in the inductive argument at the end (see section 4.6).

4.2.
The idea is to construct recursively an orthogonal transition matrix $\mathcal{U} = U_0 + U_1 t + U_2 t^2 + \cdots$ that diagonalizes $\mathcal{A}$. When trying to do so it turns out that the recursive computation of $U_0, U_1, U_2, \ldots$ can be carried out provided the diagonal entries of $A_0$ are pairwise different. However, when some diagonal entries of $A_0$ are repeated there arise serious troubles. The underlying geometric reason for these obstacles is that in the second case the given matrix $\mathcal{A}$ may have multiple eigenvalues, and consequently $\mathcal{U}$ is not uniquely determined by $\mathcal{A}$. The way out of the difficulties is as follows: we shall not attempt to put the given matrix $\mathcal{A}$ into diagonal form at once, but we will first decompose $\mathcal{A}$ into blocks the sizes of which are determined by the multiplicities occurring in $A_0$. The clue is given by the following result.

Lemma 2: Let $\mathcal{A} \in \mathrm{Mat}_n(K)$ be symmetric with initial coefficient $A_0$ satisfying (1). Then there exist an integer $r$ with $1 \leq r \leq n-1$ along with an orthogonal matrix $\mathcal{U} \in \mathrm{Mat}_n(K)$ such that $\mathcal{U}^* \mathcal{A}\, \mathcal{U}$ is $(r, n-r)$-blockdiagonal.

4.3. The proof will be divided into several steps and will cover the next three subsections. Since $A_0$ is not a multiple of $I$ the multiplicity $r$ of the diagonal entry $a_{11}$ is strictly less than $n$. After conjugating by some permutation matrix we may assume that
$$a_{ii} = a_{11} \ \text{ for } 1 \leq i \leq r, \qquad a_{ii} \neq a_{11} \ \text{ for } r+1 \leq i \leq n.$$
The multiplicity r is the number r referred to in the statement of Lemma 2.

4.4.
We shall construct recursively matrices $U_0, U_1, U_2, \ldots$ in $\mathrm{Mat}_n(K_0)$ such that $\mathcal{U} := U_0 + U_1 t + U_2 t^2 + \cdots$ satisfies both

(2) $\mathcal{U}^* \mathcal{U} = I$, and

(3) $\mathcal{U}^* \mathcal{A}\, \mathcal{U}$ is $(r, n-r)$-blockdiagonal.
The first task is to express the above two conditions in terms of the $U_k$'s. Multiplying out the series $\mathcal{U} = U_0 + U_1 t + U_2 t^2 + \cdots$ and $\mathcal{U}^* = U_0^* + U_1^* t + U_2^* t^2 + \cdots$ we obtain $\mathcal{U}^* \mathcal{A}\, \mathcal{U} = \sum_{m \geq 0} V_m t^m$ with

(4) $V_m = \sum_{i+j+h=m} U_i^* A_j U_h$,

so condition (3) says that every $V_m$ is $(r, n-r)$-blockdiagonal. Likewise condition (2) says that $U_0^* U_0 = I$ and that $\sum_{i+j=m} U_i^* U_j = 0$ for every $m \geq 1$. We take $U_0 = I$ and abbreviate $S_m = \sum_{i+j=m,\ 0<i,j<m} U_i^* U_j$, a symmetric matrix. The orthogonality condition of order $m$ then reads $U_m^* + U_m + S_m = 0$, and its general solution is

(5) $U_m = -\tfrac{1}{2} S_m + Q_m$,

where $Q_m$ is any antisymmetric matrix. Since $S_m$ is determined by the matrices $U_0, \ldots, U_{m-1}$ already constructed, the task is to choose $Q_m$ in such a way that the resulting matrix $V_m$ given by (4) is blockdiagonal. Separating in (4) the two summands corresponding to $(i, j, h) = (m, 0, 0)$ and $(i, j, h) = (0, 0, m)$ we obtain
$$V_m = U_m^* A_0 + A_0 U_m + \sum_{\substack{i+j+h=m \\ i \neq m,\ h \neq m}} U_i^* A_j U_h.$$
Substituting (5) into the above expression we obtain

(6) $V_m = -Q_m A_0 + A_0 Q_m + T_m$, where $T_m = -\tfrac{1}{2}(S_m A_0 + A_0 S_m) + \sum\limits_{\substack{i+j+h=m \\ i \neq m,\ h \neq m}} U_i^* A_j U_h$.

Since $S_m$ and all the $A_k$'s are symmetric it follows that $T_m$ is symmetric. Notice that $T_m$ is expressed in terms of matrices already determined.

4.5. It remains to choose the antisymmetric matrix $Q_m = (q_{ij})$ so that $V_m$ is $(r, n-r)$-blockdiagonal. Since $A_0 = \operatorname{diag}(a_{11}, \ldots, a_{nn})$, equation (6) gives $(V_m)_{ij} = (a_{ii} - a_{jj})\, q_{ij} + (T_m)_{ij}$. For $1 \leq i \leq r < j \leq n$ we have $a_{ii} = a_{11} \neq a_{jj}$, so we may put $q_{ij} := -(T_m)_{ij}/(a_{ii} - a_{jj})$ and $q_{ji} := -q_{ij}$, while all remaining entries of $Q_m$ are set to $0$. Since $T_m$ is symmetric this choice is compatible with the antisymmetry of $Q_m$, and it makes both off-diagonal blocks of $V_m$ vanish.
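The recursion (5)-(6) together with this choice of $Q_m$ fits in a short program (our sketch: $K_0 = \mathbb{R}$, floating-point coefficients, and the name block_diagonalize is ours).

```python
import numpy as np

def block_diagonalize(A, r, N):
    """A[k] = coefficient of t^k; A[0] diagonal, with its first r diagonal
    entries equal and different from the remaining ones. Returns U_0, ..., U_N."""
    n = A[0].shape[0]
    d = np.diag(A[0])
    U = [np.eye(n)]                       # U_0 = I
    for m in range(1, N + 1):
        S = np.zeros((n, n))              # S_m = sum_{i+j=m, 0<i,j<m} U_i^T U_j
        for i in range(1, m):
            S += U[i].T @ U[m - i]
        T = -0.5 * (S @ A[0] + A[0] @ S)  # T_m as in (6): -(S A_0 + A_0 S)/2 ...
        for i in range(m):                # ... plus the sum over i + j + h = m
            for h in range(m):            #     with i != m and h != m
                j = m - i - h
                if 0 <= j < len(A):
                    T += U[i].T @ A[j] @ U[h]
        Q = np.zeros((n, n))              # antisymmetric; zero where not forced
        for i in range(r):
            for j in range(r, n):
                Q[i, j] = -T[i, j] / (d[i] - d[j])  # kills (V_m)_{ij}, cf. (6)
                Q[j, i] = -Q[i, j]
        U.append(-0.5 * S + Q)            # equation (5)
    return U
```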
By construction the matrix $\mathcal{U} = U_0 + U_1 t + U_2 t^2 + \cdots$ is orthogonal and $\mathcal{U}^* \mathcal{A}\, \mathcal{U}$ is blockdiagonal. The proof of Lemma 2 is complete.

4.6. We can now finish the proof of Theorem 1 by an easy induction on the size $n$. The case $n = 1$ is trivial, so assume $n > 1$. Let there be given a symmetric matrix $\mathcal{A}$ in $\mathrm{Mat}_n(K)$; by the preliminary step 4.1 we may assume that its initial coefficient $A_0$ satisfies (1). By Lemma 2 there exist an integer $r$ with $1 \leq r \leq n-1$ and an orthogonal matrix $\mathcal{U} \in \mathrm{Mat}_n(K)$ such that $\mathcal{U}^* \mathcal{A}\, \mathcal{U}$ is $(r, n-r)$-blockdiagonal. The two diagonal blocks are symmetric matrices of sizes $r < n$ and $n - r < n$, so by the induction hypothesis each of them can be diagonalized orthogonally; assembling the two transition matrices into one blockdiagonal orthogonal matrix yields a diagonalization of $\mathcal{A}$. This completes the proof of Theorem 1.

5. Applications.

The classical Spectral Theorem (for finite dimensions) states that every symmetric matrix can be orthogonally diagonalized over the field $\mathbb{R}$ of reals. Applying Theorem 1 repeatedly we deduce the following result.

Theorem 2: Let $K = \mathbb{R}((t_1, \ldots, t_m))$ be the field of formal power series in $m$ indeterminates with real coefficients. Then every symmetric matrix can be orthogonally diagonalized over $K$.

Proof:
By induction on $m$. The case $m = 0$ is the classical one, and the induction step is just the assertion "(a) ⇒ (b)" of Thm. 1.
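As a worked illustration of Theorem 2 with $m = 1$ (our example), take $\mathcal{A} = \begin{pmatrix} 1 & t \\ t & 2 \end{pmatrix}$ over $\mathbb{R}((t))$. Combining the sketches above, the recursion recovers the eigenvalues $1 - t^2 + t^4 - \cdots$ and $2 + t^2 - t^4 + \cdots$, i.e. $\tfrac{1}{2}\bigl(3 \mp \sqrt{1 + 4t^2}\bigr)$.

```python
A = [np.array([[1.0, 0.0], [0.0, 2.0]]),   # A_0 = diag(1, 2): satisfies (1)
     np.array([[0.0, 1.0], [1.0, 0.0]])]   # A_1: the off-diagonal perturbation t
A += [np.zeros((2, 2))] * (N - 1)          # pad with zeros up to order N

U = block_diagonalize(A, r=1, N=N)         # for n = 2, blockdiagonal = diagonal
D = series_mul(series_transpose(U), series_mul(A, U))
print(np.round(D[0], 6))                   # diag(1, 2)
print(np.round(D[2], 6))                   # diag(-1, 1), so lambda_1 = 1 - t^2 + ...
```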
The above Spectral Theorem can even be generalized to fields of power series in infinitely many variables, as we shall now show. We start with $\Gamma := \bigoplus_{i \in \mathbb{N}} \mathbb{Z}$, a direct sum of infinitely many copies of the group of integers. $\Gamma$ is an abelian, additive group under componentwise operations. We order $\Gamma$ antilexicographically. Let $K := \mathbb{R}((\Gamma))$ be the field of generalized power series $x = \sum_{\gamma \in \Gamma} x_\gamma t^\gamma$ with real coefficients whose support $\{\gamma : x_\gamma \neq 0\}$ is well-ordered, and let $\varphi$ be the canonical valuation assigning to $x \neq 0$ the minimum of its support. The valued field $(K, \varphi)$ is complete and henselian; for details we refer to [5] or [6]. Now we can state

Theorem 3: Over the field $K := \mathbb{R}((\Gamma))$ every (finite) symmetric matrix can be orthogonally diagonalized.
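For instance, reading "antilexicographic" as comparison at the largest index where two tuples differ (our illustration), one has in $\Gamma$
$$(5, 1, 0, 0, \ldots) \;<\; (0, 2, 0, 0, \ldots),$$
since the last differing coordinate carries $1 < 2$. This matches the iterated construction $\mathbb{R}((t_1, \ldots, t_m)) = K_0((t_m))$ used in Theorem 1, in which the exponent of the last variable $t_m$ is compared first.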
Proof: Write $K_m := \mathbb{R}((t_1, \ldots, t_m))$, so that $K_{m+1} = K_m((t_{m+1}))$ and all the $K_m$ are subfields of $K$; let $\pi_m$ denote the residue epimorphism of the $t_{m+1}$-adic valuation ring of $K_{m+1}$ onto $K_m$, applied to matrices entrywise. Given a symmetric matrix $\mathcal{A}$, Theorem 2 provides for each $m$ an orthogonal matrix $\hat{U}_m$ over $K_m$ diagonalizing the corresponding truncation of $\mathcal{A}$. The point is to show that the orthogonal matrices $\hat{U}_m$ can be chosen in such a way that

(7) $\pi_m(\hat{U}_{m+1}) = \hat{U}_m$.
If all the eigenvalues of the given matrix $\mathcal{A}$ are simple then (7) is automatically satisfied, as is shown by a routine verification. In the case where $\mathcal{A}$ has multiple eigenvalues the orthogonal matrices $\hat{U}_m$ are not unique, and one has to choose a suitable basis in each eigenspace.
Since $K$ is complete we easily deduce from (7) that the sequence $(\hat{U}_m)_{m \in \mathbb{N}_0}$ converges in the valuation topology to some matrix $\mathcal{U}$; by continuity we conclude that $\mathcal{U}$ is orthogonal and $\mathcal{U}^* \mathcal{A}\, \mathcal{U}$ is diagonal. This completes the proof.