
Orthomodular spaces are the counterpart of Hilbert spaces for fields other than R or C. Both share numerous properties, foremost among them the validity of the projection theorem. Nevertheless, in the study of bounded linear operators begun in [3], striking differences with the classical theory appeared. In fact, in this paper we construct, on the canonical non-archimedean orthomodular space E of [5], two infinite families of self-adjoint bounded linear operators having no invariant closed subspaces other than the trivial ones. The spectrum of each of these operators contains exactly one point, which, therefore, is not an eigenvalue. We also study the relations between the subalgebras of bounded linear operators of E that are the commutants of each of these operators, and the algebra A studied in [3].


Introduction
A vector space V provided with a hermitian form Φ is an orthomodular space if V = U ⊕ U⊥ for every linear subspace U of V satisfying U = U⊥⊥. Until 1979, Hilbert spaces over R, C or H were the only known examples of such spaces, but since then classes of non-classical orthomodular spaces have been constructed ([5], [2]). All of these new examples are infinite-dimensional vector spaces over complete Krull-valued fields, where the hermitian forms induce non-archimedean norms.
The orthomodular space E considered from now on was the first non-classical example; it was constructed in [5] (over an ordered field) and generalized, in the context of valued fields, in [2]. We now present an outline of its construction.
The value group of the Krull valuation of the base field K is Γ := ⊕_{j∈N} Γ_j, where each Γ_j is an isomorphic copy of the additive group of integers. Γ is ordered antilexicographically, i.e., if 0 ≠ (g_j)_{j∈N} ∈ Γ and m := max{j ∈ N : g_j ≠ 0}, then (g_j)_{j∈N} > 0 ⟺ g_m > 0 in Γ_m.
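To make the ordering concrete, here is a small sketch (not part of the construction): elements of the direct sum are encoded as dicts {j: g_j} holding their non-zero components, a convention of this illustration only.

```python
def is_positive(gamma):
    """Antilexicographic positivity on the direct sum of copies of Z.

    gamma is a dict {j: g_j} holding the non-zero components of an
    element of the direct sum; the empty dict is the zero element.
    A non-zero element is positive iff its highest non-zero
    component is positive.
    """
    if not gamma:
        return False          # zero is not strictly positive
    m = max(gamma)            # m = max{j : g_j != 0}
    return gamma[m] > 0

def antilex_less(g1, g2):
    """g1 < g2 iff g2 - g1 is antilexicographically positive."""
    diff = {}
    for j in set(g1) | set(g2):
        d = g2.get(j, 0) - g1.get(j, 0)
        if d != 0:
            diff[j] = d
    return is_positive(diff)

# the higher index always dominates: (3, 0, 0, ...) < (0, 1, 0, ...)
assert antilex_less({1: 3}, {2: 1})
# (7, 4, 0, ...) < (0, -5, 1, 0, ...): the component at j = 3 decides
assert antilex_less({1: 7, 2: 4}, {2: -5, 3: 1})
```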
The base field K is the completion of K_0 with respect to the valuation ν_0, which extends uniquely to a valuation ν on K with the same value group.
We define the K-vector space E := {(ξ_i)_{i∈N_0} ∈ K^{N_0} : Σ_i ξ_i² X_i converges in the valuation topology}, with componentwise operations. This vector space over K, together with the anisotropic hermitian form Φ((ξ_i), (η_i)) := Σ_i ξ_i η_i X_i, is an orthomodular space (see [5], [2]).
Then, following the notation of [3], the assignment ‖·‖ : E −→ Γ ∪ {∞} defined by ‖x‖ := ν(Φ(x, x)) satisfies the strong triangle inequality, ‖x + y‖ ≥ min{‖x‖, ‖y‖}, and induces a topology on E together with the notion of Cauchy nets in E, for which E is complete.
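For a vector with finitely many non-zero coordinates, ‖x‖ can be computed directly: the summands ξ_i² X_i of Φ(x, x) have pairwise distinct valuations 2ν(ξ_i) + ν(X_i), so the strong triangle inequality forces ν of the sum to be their minimum. The sketch below assumes (a convention of this illustration) that ν(X_i) is the i-th "unit vector" of Γ for i ≥ 1, and encodes group elements as dicts {j: g_j}.

```python
def antilex_key(gamma, top=10):
    # Read the components from the highest index down, so that the
    # lexicographic order on tuples matches the antilexicographic
    # order on the value group (indices up to `top` only).
    return tuple(gamma.get(j, 0) for j in range(top, 0, -1))

def norm(coeff_vals):
    """||x|| = nu(Phi(x, x)) for a finitely supported x = sum xi_i e_i.

    coeff_vals: dict {i: nu(xi_i)} over the non-zero coordinates,
    each nu(xi_i) itself a dict {j: g_j} (only i >= 1 in this sketch).
    The summands xi_i^2 X_i of Phi(x, x) have the pairwise distinct
    valuations 2 nu(xi_i) + nu(X_i), so the strong triangle
    inequality makes nu of the sum their minimum.
    """
    terms = []
    for i, v in coeff_vals.items():
        term = {j: 2 * g for j, g in v.items()}     # 2 nu(xi_i)
        term[i] = term.get(i, 0) + 1                # + nu(X_i)
        terms.append({j: g for j, g in term.items() if g != 0})
    return min(terms, key=antilex_key)

# ||e_1|| = nu(X_1)
assert norm({1: {}}) == {1: 1}
# the coordinate of smallest valuation dominates: the i = 2 term wins
assert norm({1: {2: 1}, 2: {}}) == {2: 1}
```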
Moreover, a subspace U of E is closed in this topology if and only if it is orthogonally closed, that is, U⊥⊥ = U ([5]).
We shall also work here with elements of B(E), the algebra of bounded linear operators B : E −→ E for which there exists an element γ ∈ Γ such that ‖B(x)‖ − ‖x‖ ≥ γ for all x ∈ E, x ≠ 0.
In Section 2 we summarize the geometric properties of E (its residual spaces and the definition of types in this space) and the results concerning the algebra B(E) and the subalgebra A that will be needed later on. In Section 3 the core of this work is developed: we define two infinite families of bounded operators on E, perturbations of the operator A studied in [3], and we prove that each element of these families is an indecomposable self-adjoint operator (Theorem 3.1 and Theorem 3.5) with non-empty spectrum (Theorem 3.6). Both families contain a sequence of bounded operators converging to A. Finally, in Section 4, we establish that the commutant algebras of the operators defined are mutually distinct and that the intersection of each of these algebras with A is minimal (Theorem 4.4 and Theorem 4.5).

Preliminaries
The required results of [5], [3] and [4] are condensed in this section. We use the notation and definitions of the preceding section.
Φ(e_i, e_j) = 0 if i ≠ j, and Φ(e_i, e_i) = X_i. In addition, each x ∈ E can be uniquely written as a series x = Σ_{i≥0} ξ_i e_i, convergent in the ‖·‖-topology.

An extremely useful technique for our work is the reduction of bounded operators to the residual spaces of E. Let us recall the definition of these spaces and some of their properties. The convex subgroups (see [6] for a definition) of Γ are exactly the subgroups Δ_0 := {0} and Δ_n := Γ_1 ⊕ ⋯ ⊕ Γ_n (n ≥ 1). For each n, let R_n be the valuation ring of K determined by Δ_n and J_n its maximal ideal; K_n := R_n/J_n is the residual field corresponding to Δ_n (we let Θ_n : R_n −→ K_n be the canonical projection). From the strong triangle inequality of ‖·‖ it follows that M_n := {x ∈ E : ‖x‖ ≥ δ for some δ ∈ Δ_n} is a module over R_n and S_n := {x ∈ E : ‖x‖ > δ for all δ ∈ Δ_n} is a submodule. E_n := M_n/S_n is a vector space over K_n (π_n : M_n −→ E_n is the canonical projection), with scalar multiplication defined by Θ_n(ξ) · π_n(x) := π_n(ξx). The vectors ē_i := π_n(e_i), i = 0, 1, …, n, form an orthogonal basis of (E_n, Φ_n), where Φ_n(π_n(x), π_n(y)) := Θ_n(Φ(x, y)).
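As a quick illustration of the reduction: under the assumption, used only in this sketch, that Δ_n consists of the elements of Γ supported on indices ≤ n, membership of x in M_n or S_n is decided by the components of ‖x‖ beyond index n, and π_n(x) ≠ 0 exactly when ‖x‖ lies in Δ_n itself. Norms are encoded as dicts {j: g_j}.

```python
def tail_sign(gamma, n):
    # Sign, in the antilexicographic order, of the part of gamma
    # supported beyond the convex subgroup Delta_n.
    beyond = {j: g for j, g in gamma.items() if j > n and g != 0}
    if not beyond:
        return 0
    return 1 if beyond[max(beyond)] > 0 else -1

def in_M_n(norm_x, n):
    # x in M_n  iff  ||x|| >= delta for some delta in Delta_n
    return tail_sign(norm_x, n) >= 0

def in_S_n(norm_x, n):
    # x in S_n  iff  ||x|| > delta for every delta in Delta_n
    return tail_sign(norm_x, n) > 0

def survives_reduction(norm_x, n):
    # pi_n(x) != 0 exactly when x lies in M_n but not in S_n,
    # i.e. when ||x|| belongs to Delta_n itself
    return in_M_n(norm_x, n) and not in_S_n(norm_x, n)

assert survives_reduction({}, 3)          # ||e_0|| = 0 lies in Delta_3
assert survives_reduction({2: 1}, 3)      # ||e_2|| lies in Delta_3
assert not survives_reduction({4: 1}, 3)  # e_4 belongs to S_3
```

This matches the fact that ē_0, …, ē_n form a basis of E_n: the basis vectors beyond index n reduce to zero.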

Types in E.
Studying linear operators on E through their "reductions" to residual spaces relies strongly on the concept of types. In this subsection we recall this definition for our particular space ([3]), as well as some significant results that use it. A type T(γ) is assigned to each γ = (g_j)_{j∈N} ∈ Γ by T(γ) := max{j ∈ N : g_j is odd}, with T(γ) := 0 when every g_j is even. A type is also assigned to every non-zero scalar and to every non-zero vector of the space: the type of ξ ∈ K* is T(ν(ξ)), and the type of x ∈ E, x ≠ 0, is T(‖x‖). Note that for each pair γ, γ′ ∈ Γ, T(γ + 2γ′) = T(γ). The following results relate some geometric properties of E to the concept of types.
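The exact formula for T(γ) is given in [3]; for illustration we adopt here one consistent reading (an assumption of this sketch): the type is the largest index at which the component is odd. This makes T invariant under adding elements of 2Γ, so x and ξx receive the same type.

```python
def type_of(gamma):
    # T(gamma): largest index with odd component (a hypothetical
    # reading adopted for this sketch); 0 if every component is even.
    odd = [j for j, g in gamma.items() if g % 2 != 0]
    return max(odd) if odd else 0

def add(g1, g2):
    # componentwise sum in the value group, dropping zero components
    s = {j: g1.get(j, 0) + g2.get(j, 0) for j in set(g1) | set(g2)}
    return {j: g for j, g in s.items() if g != 0}

# the type of e_3 is 3; scaling by xi shifts the norm by 2 nu(xi),
# which never changes the type
nu_e3 = {3: 1}
two_nu_xi = {1: 4, 5: 2}
assert type_of(nu_e3) == 3
assert type_of(add(nu_e3, two_nu_xi)) == 3
assert type_of({2: 4}) == 0
```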
Theorem 2.3 ([3]).
i) Non-zero orthogonal vectors of E have distinct types.
ii) Let U be a closed subspace of E. Then the same types occur in any two maximal orthogonal families in U.

B(E) and the subalgebra A.
Recall that B(E) is the algebra of bounded linear operators on E. Clearly, each linear operator on E is determined by the images of the standard basis {e_i : i ≥ 0}; it can therefore be represented by an infinite matrix. Since B ∈ B(E) is self-adjoint if and only if Φ(B(x), y) = Φ(x, B(y)) for all x, y ∈ E, a bounded operator M with matrix (m_ij) is self-adjoint if and only if

(2.1)  m_ij X_i = m_ji X_j  for all i, j ≥ 0.

Every operator in this work, aside from being self-adjoint, also has the property defined below.

Definition 2.6. A linear operator B : E −→ E is indecomposable if it admits no closed invariant subspaces of E with the exception of {0} and E.
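Condition (2.1) can be checked mechanically on finite truncations. The sketch below is illustrative only: matrix entries over Q(X_1, …, X_N) are encoded as dicts mapping exponent tuples to rational coefficients, and X_index (a name of this sketch) records which variable plays the role of Φ(e_i, e_i).

```python
from fractions import Fraction

def times_X(f, i):
    # Multiply a polynomial f (dict: exponent tuple -> coefficient)
    # by the variable X_i (1-based index into the exponent tuple).
    out = {}
    for exp, c in f.items():
        e = list(exp)
        e[i - 1] += 1
        out[tuple(e)] = out.get(tuple(e), Fraction(0)) + c
    return out

def is_self_adjoint(m, X_index):
    # Check m_ij * X_i == m_ji * X_j for all i, j -- the matrix
    # form (2.1) of self-adjointness, as used in this sketch.
    n = len(m)
    return all(times_X(m[i][j], X_index[i]) == times_X(m[j][i], X_index[j])
               for i in range(n) for j in range(n))

one = {(0, 0): Fraction(1)}
X1 = {(1, 0): Fraction(1)}
X2 = {(0, 1): Fraction(1)}

# m_01 = X_2, m_10 = X_1 satisfies m_01 * X_1 = X_1 X_2 = m_10 * X_2
assert is_self_adjoint([[one, X2], [X1, one]], [1, 2])
# replacing m_10 by 1 breaks the symmetry
assert not is_self_adjoint([[one, X2], [one, one]], [1, 2])
```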

Lemma 2.7 ([3]). A map B defined on the standard basis {e_i : i ≥ 0} such that {‖B(e_i)‖ − ‖e_i‖ : i ∈ N_0} is bounded from below in Γ extends linearly to an operator in B(E). Moreover, if ‖B(x)‖ ≥ ‖x‖ for all x ∈ E, then B maps M_n into M_n and S_n into S_n, and hence induces a linear operator B_n : E_n −→ E_n defined by B_n(π_n(x)) := π_n(B(x)).
These are the induced operators that allow us to study operators on E.
In [3], the authors study an operator A : E −→ E, defined on the standard basis of E (its definition and its matrix in that basis are given there). Additionally, ‖A(e_k)‖ − ‖e_k‖ = 0 for all k ∈ N_0. Thus ‖A(x)‖ − ‖x‖ ≥ 0 for all x ∈ E, and A induces operators A_n on every residual space of E.
In the following results, properties of the induced operators on the spaces E n are lifted to properties of the operator A defined on E.

Lemma 2.9 ([3]). If n ≥ 1, then the equation
in the variable ρ has no solution in K_n.
As a consequence, we have
Lemma 2.10 ([3]). The operator A_n : E_n −→ E_n (n ≥ 1) has no eigenvectors.
Theorem 2.11 ([3]). The operator A is indecomposable.

Proof. Let U ≠ {0} be a proper closed subspace of E, invariant under A.
Since E is orthomodular and U is closed, E = U ⊕ U⊥. In addition, since A is a self-adjoint operator, U⊥ is also invariant under A.
Looking at the types of vectors in U and U⊥: by Theorem 2.3(i), no type can occur in U and in U⊥ at the same time. Hence, by Theorem 2.3(ii), either U or U⊥ contains a vector of type 0 and, without loss of generality, we may assume it is U. Hence there exists an integer n ≥ 1 such that U contains vectors of types 0, 1, …, n − 1 and U⊥ contains a vector of type n. We examine the reduced operator A_n on the residual space E_n; both π_n(U) and π_n(U⊥) are invariant under A_n. Let G be the (one-dimensional) subspace of U⊥ spanned by a vector of type n. By the choice of n and by Theorem 2.3(i), U⊥ ∩ G⊥ contains only vectors of types greater than n; therefore, by Lemma 2.4, π_n(U⊥ ∩ G⊥) = {0}. Hence π_n(U⊥) = π_n(G) is a one-dimensional subspace of E_n, invariant under A_n. In other words, A_n has an eigenvector. But we know this is impossible by Lemma 2.10.

The proof of Theorem 2.11 does not use the specific definition of A. Hence it applies to any bounded self-adjoint operator whose reduced operators have no eigenvectors.
As an immediate consequence of Theorem 2.11, A has no eigenvectors.
Finally, we summarize the main characteristics of the subalgebra A: it is a commutative algebra (Corollary 5.11 of [3]) and all its elements are self-adjoint (Corollary 5.5 of [3]). Since A is indecomposable, we have
Lemma 2.14 ([3]). If B, C ∈ A coincide on some non-zero vector, then B = C.

Corollary 2.15 ([3]). Every non-trivial operator of A is injective.
So, each element of A is completely determined by its action on a single non-zero vector. In [4], the following formulas were established.

Construction of indecomposable self-adjoint operators
3.1. The operators B_Q,s.
Let p, s ∈ N be such that 1 < p < s, and consider the set Q = {q_1, …, q_p}, where q_1 < ⋯ < q_p and q_j ∈ {0, 1, …, s − 1} for j = 1, …, p. On the standard basis of E we define the map B⁰_Q,s. It is easy to check that ‖B⁰_Q,s(e_i)‖ − ‖e_i‖ = 0 for all i ∈ N_0. By Lemma 2.7, B⁰_Q,s can be extended linearly to an operator B_Q,s ∈ B(E). The matrix of B_Q,s in the standard basis is identical to the matrix of A in the same basis, except for certain entries, determined by Q and s, which are replaced by zeros. Then clearly this matrix satisfies (2.1) too, and B_Q,s is self-adjoint.
The following is the main result of this section.
Since B_Q,s is a self-adjoint bounded operator, by the proof of Theorem 2.11 it is enough to prove that none of the operators induced by B_Q,s on the residual spaces has eigenvectors.
Let B_n := (B_Q,s)_n be the operator induced by B_Q,s on E_n. To prove that B_n (n ≥ 1) has no eigenvectors, we consider two cases. When n < s, B_n equals the operator induced by A on E_n; hence B_n = A_n has no eigenvectors (by Lemma 2.10).
The case n ≥ s requires a keener study. The problem of determining whether B_n has eigenvectors is equivalent to that of solving a finite system of equations. Thus, the goal of all that follows is to prove that such a system has no solution.
Before proving this lemma, we will establish some facts.
Adding up all these equations and dividing the sum by η, we obtain an equation in the single variable λ. This equation must have a solution λ ∈ K_n, since the system (3.2) can be solved by our initial assumption.
Substituting in (3.3), we obtain the equality (3.4), which we consider in K̄_{n−1}(X_n) (where K̄_{n−1} is an algebraic closure of K_{n−1}). If deg ϕ(X_n) > 0, there exists ξ ∈ K̄_{n−1} such that ϕ(ξ) = 0 and τ(ξ) ≠ 0. Hence θ(ξ) = p and, replacing in (3.4), we arrive at a contradiction. If deg τ(X_n) > 0, we consider separately the cases n > s and n = s. In each one, we distinguish two subcases, according to whether τ(X_n) has a non-zero root ζ in K̄_{n−1}; evaluating (3.4) at X_n = ζ, we again arrive at a contradiction.

3.2. The operators B_pqr.
The matrix of B_pqr in the standard basis satisfies (2.1); hence B_pqr is a self-adjoint operator.
As with the family of operators B_Q,s, but through much more difficult algebraic work ([1]), one can prove that none of the operators induced by B_pqr on the residual spaces has an eigenvector. Hence, proceeding analogously to Section 3.1, we get the following result.

Theorem 2.8 ([3]). Let B : E −→ E be an injective bounded linear operator on E. If {‖B(e_i)‖ − ‖e_i‖ : i ∈ N_0} has an upper bound in Γ, then B is surjective and its algebraic inverse B⁻¹ belongs to B(E).

3.3. The spectra of B_Q,s and B_pqr.
In the previous two sections we have proved that the operators B_Q,s, as well as the operators B_pqr, are indecomposable. Hence they have no eigenvectors, and the bounded operators B_Q,s − λI and B_pqr − λI are injective for all λ ∈ K. Recall that, by Theorem 2.8, an injective operator C ∈ B(E) is invertible if and only if the set {‖C(e_i)‖ − ‖e_i‖ : i ∈ N_0} is bounded from above in Γ. Given λ ∈ K, the sets R_Q,s = {‖(B_Q,s − λI)(e_i)‖ − ‖e_i‖ : i ∈ N_0} and R_pqr = {‖(B_pqr − λI)(e_i)‖ − ‖e_i‖ : i ∈ N_0} differ in only finitely many elements from the set R_A = {‖(A − λI)(e_i)‖ − ‖e_i‖ : i ∈ N_0}.