Rook placements in G_2 and F_4 and associated coadjoint orbits

Abstract. Let n be a maximal nilpotent subalgebra of a simple complex Lie algebra with root system Φ. A subset D of the set Φ+ of positive roots is called a rook placement if it consists of roots with pairwise non-positive inner products. To each rook placement D and each map ξ from D to the set C× of nonzero complex numbers one can naturally assign the coadjoint orbit Ω_{D,ξ} in the dual space n*. By definition, Ω_{D,ξ} is the orbit of f_{D,ξ} = Σ_{α∈D} ξ(α) e*_α, where the e*_α are the root covectors (in fact, almost all coadjoint orbits studied to date have this form, for certain D and ξ). It follows from the results of André that if ξ_1 and ξ_2 are distinct maps from D to C×, then Ω_{D,ξ_1} and Ω_{D,ξ_2} do not coincide for classical root systems Φ. We prove that the same holds if Φ is of type G_2, or if Φ is of type F_4 and D is orthogonal.


Introduction and the main result
Let g be a simple complex Lie algebra, b a Borel subalgebra of g, Φ the root system of g, Φ+ the set of positive roots corresponding to b, n the nilradical of b, N = exp(n) the corresponding nilpotent algebraic group, and n* the dual space of n. The group N acts on n by the adjoint action; the dual action of N on the space n* is called coadjoint, and we denote the result of this action by g.λ for g ∈ N, λ ∈ n*. According to the orbit method discovered by A.A. Kirillov in 1962, coadjoint orbits play a key role in the representation theory of N (see [10], [11]). We will consider a special class of coadjoint orbits defined below.

Definition 1.1. A subset D of Φ+ is called a rook placement if (α, β) ≤ 0 for all distinct α, β ∈ D, where (−, −) denotes the inner product.
The root vectors e_α, α ∈ Φ+, form a basis of n; we denote by {e*_α, α ∈ Φ+} the dual basis of n*. Given a rook placement D and a map ξ : D → C×, we put

f_{D,ξ} = Σ_{α∈D} ξ(α) e*_α ∈ n*.

Definition 1.2. We say that the coadjoint orbit Ω_{D,ξ} of the linear form f_{D,ξ} is associated with the rook placement D and the map ξ.
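To make Definition 1.2 concrete in the simplest case, here is a small sketch for type A_{n−1} (the helper names and encoding are ours, not from the paper): roots ε_i − ε_j are stored as pairs (i, j), and f_{D,ξ} is realized as a strictly lower-triangular matrix, anticipating the identification e*_{i,j} = e_{j,i} recalled in Section 2.

```python
# Type A_{n-1}: positive roots are eps_i - eps_j (i < j), encoded as pairs (i, j).
# Sketch only; function names are ours.

def inner(a, b):
    """Inner product (eps_i - eps_j, eps_r - eps_s) in R^n."""
    (i, j), (r, s) = a, b
    return (i == r) - (i == s) - (j == r) + (j == s)

def is_rook_placement(D):
    """Check Definition 1.1: pairwise non-positive inner products."""
    return all(inner(a, b) <= 0 for a in D for b in D if a != b)

def f_matrix(n, D, xi):
    """f_{D,xi} = sum_{alpha in D} xi(alpha) e*_alpha, realized as a strictly
    lower-triangular n x n matrix (using e*_{i,j} = e_{j,i})."""
    f = [[0] * n for _ in range(n)]
    for (i, j) in D:
        f[j - 1][i - 1] = xi[(i, j)]
    return f

# Example: D = {eps_1 - eps_3, eps_2 - eps_4} is a rook placement in A_3
# (the two roots are orthogonal), while {eps_1 - eps_2, eps_1 - eps_3} is not.
```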
It turns out that almost all coadjoint orbits studied to date are associated with certain D and ξ (see, e.g., [1], [2], [12], [13], [14], [7], [3], [4], [5]). On the other hand, C.A.M. André discovered that, for the case of A_{n−1}, rook placements themselves provide a nice splitting of n* into a disjoint union of N-stable affine subvarieties called basic subvarieties. (We will recall André's results in detail in Section 2, because we will use them for the case of F_4.) Direct computations show that Conjecture 1.3 is true for classical root systems of low rank. In the present paper, we check that this conjecture is true for the case of G_2 (Theorem 1.4); this is our first main result. In fact, given D and ξ, we present an explicit system of equations describing O_{D,ξ}, so that n* = ⊔_{D,ξ} O_{D,ξ}, where the union is taken over all non-singular rook placements D and all maps ξ : D → C×.

It turns out that, for A_{n−1}, if D is a rook placement and ξ_1, ξ_2 are distinct maps from D to C×, then the associated orbits Ω_{D,ξ_1} and Ω_{D,ξ_2} do not coincide (this follows immediately from André's theory, since Ω_{D,ξ} ⊂ O_{D,ξ}; see Section 2). For other classical root systems this fact can be obtained as a corollary of the case of A_{n−1} (see also [2]). This was used by M.V. Ignatyev and I. Penkov in [8] and [6] for the explicit classification of centrally generated primitive ideals in the universal enveloping algebra U(n) for classical root systems.
In [9], M.V. Ignatyev and A.A. Shevchenko, while classifying centrally generated primitive ideals in U(n) for exceptional root systems, proved that the analogous statement is true for certain orthogonal rook placements in F_4 and in E_6, E_7, E_8. This allows us to formulate the second conjecture for an arbitrary root system.

Conjecture 1.5. Let D be a non-singular rook placement and ξ_1, ξ_2 be distinct maps from D to C×. Then the associated coadjoint orbits Ω_{D,ξ_1} and Ω_{D,ξ_2} do not coincide.
Our second main result is that this conjecture is true for F_4 if D is orthogonal (i.e. if all roots from D are pairwise orthogonal).

André's theory
In this section, we briefly recall André's results from [1], which will be needed in the sequel. Throughout this section, Φ will be of type A_{n−1}. As usual, we identify the set of positive roots with the following subset of the Euclidean space R^n with the standard inner product:

Φ+ = {ε_i − ε_j, 1 ≤ i < j ≤ n}.

Here ε_1, . . ., ε_n denotes the standard basis of R^n. In this case, n can be identified with the space of strictly upper-triangular n × n matrices. Given α = ε_i − ε_j ∈ Φ+, one can pick the (i, j)-th elementary matrix e_{i,j} as a root vector e_α, so that [e_α, e_β] = ±e_{α+β} for α, β ∈ Φ+ (we put e_{α+β} = 0 if α + β ∉ Φ+). We identify the dual space n* with the space n− of strictly lower-triangular n × n matrices via the formula ⟨λ, x⟩ = tr(λx) for x ∈ n, λ ∈ n−. The root vectors e_α, α ∈ Φ+, form a basis of n; let {e*_α, α ∈ Φ+} be the dual basis of n* (in fact, e*_{i,j} = e_{j,i}).

The group N is the group of all upper-triangular n × n matrices with 1's on the diagonal. It acts on its Lie algebra n via the adjoint action Ad_g(x) = gxg^{−1}, g ∈ N, x ∈ n. The dual action of N on the space n* is called coadjoint; we denote the result of this action by g.λ, g ∈ N, λ ∈ n*. It is easy to check that, after the identification of n* with n−, this action takes the form g.λ = (gλg^{−1})_low, where, for an n × n matrix a, we set (a_low)_{i,j} = a_{i,j} if i > j and 0 otherwise.
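The formula g.λ = (gλg^{−1})_low is easy to experiment with numerically. The following sketch (helper names ours) inverts a unipotent matrix via its terminating Neumann series and truncates the product to its strictly lower-triangular part:

```python
from fractions import Fraction

# Sketch of the coadjoint action g.lam = (g lam g^{-1})_low on strictly
# lower-triangular matrices, for g unipotent upper-triangular. Names are ours.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def unipotent_inverse(g):
    """g = I + X with X strictly upper-triangular nilpotent, hence
    g^{-1} = I - X + X^2 - ... (the series terminates after n - 1 steps)."""
    n = len(g)
    I = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    X = [[Fraction(g[i][j]) - I[i][j] for j in range(n)] for i in range(n)]
    inv, term, sign = [row[:] for row in I], [row[:] for row in I], 1
    for _ in range(n - 1):
        term = matmul(term, X)
        sign = -sign
        inv = [[inv[i][j] + sign * term[i][j] for j in range(n)] for i in range(n)]
    return inv

def coadjoint(g, lam):
    """g.lam = (g lam g^{-1})_low: keep only entries strictly below the diagonal."""
    n = len(g)
    m = matmul(matmul(g, lam), unipotent_inverse(g))
    return [[m[i][j] if i > j else 0 for j in range(n)] for i in range(n)]
```

For instance, with n = 3, g = exp(e_{1,2}) and λ = e*_{ε_1−ε_3} = e_{3,1}, one gets g.λ = e_{3,1} − e_{3,2}, illustrating that the coefficient at the maximal root ε_1 − ε_3 is unchanged.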
Mikhail V. Ignatev, Matvey A. Surkov

Definition 2.1. Pick a number k from 1 to n. We call the sets R_k = {ε_k − ε_j, k < j ≤ n} and C_k = {ε_i − ε_k, 1 ≤ i < k} the k-th row and the k-th column of Φ+, respectively. We say that the number i (respectively, the number j) is the row (respectively, the column) of a root α = ε_i − ε_j.
Example 2.2. Let n = 6. In the picture below, the boxes from R_5 ∪ C_2 are grey. Here we identify a root ε_i − ε_j ∈ Φ+ with the box (j, i).
It follows immediately from the definition of a rook placement that |D ∩ R_k| ≤ 1 and |D ∩ C_k| ≤ 1 for all k. This explains the term "rook placement": if we identify the symbols ⊗ from f_{D,ξ} with rooks on the lower-triangular chessboard, then these rooks do not attack each other. Now, given a root α ∈ D, we denote by ξ_α the restriction of the map ξ to the subset {α}, and put O_{D,ξ} = Σ_{α∈D} Ω_{{α},ξ_α} (the sum of subsets of n*). Clearly, Ω_{D,ξ} ⊂ O_{D,ξ}.
Definition 2.4. The set O_{D,ξ} is called a basic subvariety of n* corresponding to the rook placement D and the map ξ.
According to [1, Theorem 1], n* is a disjoint union of basic subvarieties:

n* = ⊔_{D,ξ} O_{D,ξ},

where the union is taken over all rook placements D in Φ+ and all maps ξ : D → C×. Formally, André considered the case of a finite ground field, but his proofs are valid over C, too. Furthermore, each basic subvariety O_{D,ξ} is in fact an affine subvariety of n*, and André presented an explicit set of defining equations for it. To describe this set, we need some more notation. Namely, there is a natural partial order on Φ+: we write α ≤ β if β − α is a sum of positive roots. Evidently, ε_i − ε_j ≤ ε_r − ε_s if and only if r ≤ i and s ≥ j (in other words, in our pictures the box of ε_r − ε_s is located non-strictly to the South-West of the box of ε_i − ε_j). Given α ∈ Φ+, put D(α) = {β ∈ D | β ≥ α}, and let R^D(α) (respectively, C^D(α)) be the set of all rows (respectively, of all columns) of the roots from D(α). Finally, for a matrix λ ∈ n*, we denote by ∆^D_α(λ) the minor of the matrix λ with the set of rows R^D(α) and the set of columns C^D(α). We assume that the numbers of rows and columns are taken in increasing order.
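The minors ∆^D_α(λ) are ordinary submatrix determinants with prescribed row and column sets. A minimal sketch (helper names ours; the translation of roots into row/column indices follows Definition 2.1 and is left to the caller):

```python
# Sketch of the minors used in Andre's defining equations: a submatrix
# determinant of lam with prescribed 1-based row and column sets, taken in
# increasing order as in Section 2. Helper names are ours.

def det(m):
    """Determinant by Laplace expansion along the first row (the minors that
    occur here are small, so this is fast enough)."""
    if not m:
        return 1
    return sum((-1) ** c * m[0][c] * det([r[:c] + r[c + 1:] for r in m[1:]])
               for c in range(len(m)))

def minor(lam, rows, cols):
    """Minor of lam with the given row and column index sets."""
    rows, cols = sorted(rows), sorted(cols)
    return det([[lam[i - 1][j - 1] for j in cols] for i in rows])
```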
Remark 2.7. Actually, André's proof of the fact that each λ ∈ n* belongs to a certain basic subvariety O_{D,ξ} is very straightforward. Namely, there is a total order ≤_t on Φ+ refining the partial order ≤ defined above: by definition, ε_i − ε_j <_t ε_r − ε_s if j < s, or if j = s and r < i. Now, given λ ∈ n*, we inductively construct O_{D,ξ} containing λ. If λ = 0, then D = ∅ and ξ is the unique empty map from ∅ to C×. If λ ≠ 0, then we find the smallest (with respect to ≤_t) root α_1 from Φ+ such that λ(e_{α_1}) ≠ 0, and put α_1 into D with ξ(α_1) = λ(e_{α_1}). Then we find the next root α_2 for which the corresponding minor of λ does not vanish, add α_2 to D, and define ξ(α_2) in the obvious way. Now, one can repeat this procedure to obtain the required basic subvariety O_{D,ξ}.
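The first step of this inductive construction can be sketched as follows (encoding ours; we assume roots ε_i − ε_j are compared first by j and then by descending i, one natural refinement of the partial order):

```python
# First step of the construction in Remark 2.7 (sketch; names and the exact
# tie-breaking convention for <=_t are our assumptions).

def t_key(alpha):
    """Sort key for the total order <=_t on roots eps_i - eps_j: compare first
    by j, then by descending i."""
    i, j = alpha
    return (j, -i)

def smallest_support_root(lam):
    """Return the <=_t-smallest root alpha = eps_i - eps_j with lam(e_alpha) != 0,
    where lam is strictly lower-triangular and lam[j-1][i-1] is the coefficient
    of e*_{i,j}; return None for lam = 0."""
    n = len(lam)
    roots = sorted(((i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)),
                   key=t_key)
    for (i, j) in roots:
        if lam[j - 1][i - 1] != 0:
            return (i, j)
    return None
```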
3 Case Φ = G_2

In this section, we prove our first main result, Theorem 1.4. First, we briefly recall some basic facts about the simple Lie algebra g of type G_2 and its maximal nilpotent subalgebra n. By definition, the root system Φ = G_2 has the form Φ = Φ+ ∪ Φ−, where Φ− = −Φ+ and, denoting the simple roots by α_1 (short) and α_2 (long),

Φ+ = {α_1, α_2, α_1 + α_2, 2α_1 + α_2, 3α_1 + α_2, 3α_1 + 2α_2}.

In fact, one can choose the root vectors so that the structure constants satisfy c_1 = 1, c_2 = 2, c_3 = 3, c_4 = 1, c_5 = 3, but we will not use these explicit values in the sequel. One can immediately check that c_1 c_5 = c_3 c_4 for an arbitrary choice of the root vectors.
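Definition 1.1 can be checked mechanically in G_2. A small sketch (encoding ours, using the standard Gram matrix for a short simple root α_1 and a long simple root α_2):

```python
# Positive roots of G_2 written as (a, b) for a*alpha1 + b*alpha2, with the
# Gram matrix (alpha1,alpha1)=2, (alpha1,alpha2)=-3, (alpha2,alpha2)=6.
# Sketch; the encoding is ours.

POSITIVE_ROOTS = [(1, 0), (0, 1), (1, 1), (2, 1), (3, 1), (3, 2)]

def inner(u, v):
    """Inner product via the Gram matrix of the simple roots."""
    return 2 * u[0] * v[0] - 3 * (u[0] * v[1] + u[1] * v[0]) + 6 * u[1] * v[1]

def is_rook_placement(D):
    """Definition 1.1: pairwise non-positive inner products."""
    return all(inner(a, b) <= 0 for a in D for b in D if a != b)

# The two simple roots form a rook placement; alpha1 together with
# 2*alpha1 + alpha2 does not, since their inner product is positive.
```

Note also that the highest root 3α_1 + 2α_2 is orthogonal to α_1 in this realization.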
Recall the definition of the group N = exp(n) and of the coadjoint action of N on the dual space n*; it is straightforward to write this action out explicitly in coordinates. Now, let D be a non-singular rook placement in Φ+. Recall that non-singularity means that γ ∉ S(δ) for all distinct γ, δ ∈ D, where S(δ) denotes the set of δ-singular roots in Φ+. Fix a map ξ : D → C×, and recall that, by definition, Ω_{D,ξ} is the coadjoint orbit of the linear form f_{D,ξ}. It follows immediately that if γ is maximal (with respect to the partial order ≤ on Φ+) among all roots from D, then λ(e_γ) = f_{D,ξ}(e_γ) = ξ(γ) for all λ ∈ Ω_{D,ξ}. Similarly, λ(e_γ) = 0 for all λ ∈ Ω_{D,ξ} if there is no δ ∈ D such that δ ≥ γ. Recall also the definition of O_{D,ξ}.

System of equations for O_{D,ξ}

Proof. The proof will be performed for all rook placements in Φ+ in turn. First, assume that |D| = 1; in that case, Ω_{D,ξ} = O_{D,ξ}. Pick a linear form λ ∈ O_{D,ξ}. Then there exists x = Σ_{γ∈Φ+} x_γ e_γ ∈ n such that λ = exp(x).f_{D,ξ}. We proceed case by case. Cases 1, 2 and 3 from the table above are evident, so we start with case 4.
Proof of Theorem 1.4. Using Proposition 3.1, one can check that each λ ∈ n* belongs to exactly one of the subvarieties O_{D,ξ}. Namely, pick a linear form λ ∈ n*. Then exactly one of the following cases can occur.
The proof is complete.

4 Case Φ = F_4

In this section we prove our second main result, Theorem 1.6. To do this, we first prove the following simple lemma. Let D be a non-singular orthogonal rook placement in Φ+, where Φ = F_4, and let ξ_1, ξ_2 : D → C× be maps. Assume that there is a unique maximal root β_0 in D (with respect to the natural order on Φ+).
where c_j is the nonzero scalar such that [e_{γ_j}, e_{β_0 − γ_j}] = c_j e_{β_0}. Hence, all x_{γ_j} are uniquely defined by µ. Now, let S_k be the symmetric group on k letters.
Denote the second summand by F. Then F is uniquely defined by µ, because the x_{γ_j} and ξ(β_0) are uniquely defined by µ. If Ω_{D,ξ_1} = Ω_{D,ξ_2}, then there exist x_1, x_2 for which the corresponding exponentials send f_{D,ξ_1} and f_{D,ξ_2} to one and the same form.

Now we need a general construction, which can be applied to an arbitrary root system. Namely, let g, b and N = exp(n) be as in the introduction. Let h be the Cartan subalgebra of g such that g = n ⊕ h ⊕ n−, where n− is the nilradical of the Borel subalgebra opposite to b, and let Φ− be the set of negative roots. Then the root vectors e_α, α ∈ Φ−, form a basis of n−. Further, let α_1, . . ., α_n be the simple roots of Φ, and let h_{α_i}, 1 ≤ i ≤ n, be a basis of h such that {e_α, α ∈ Φ} ∪ {h_{α_i}, 1 ≤ i ≤ n} is a Chevalley basis of g.
We fix a total order ≤_t on this basis such that e_α <_t h_{α_i} <_t e_{−β} for all α, β ∈ Φ+, 1 ≤ i ≤ n, and e_α <_t e_β if α, β ∈ Φ and α > β. This identifies gl(g) with the Lie algebra gl_{dim g}(C), and ad(n) with a subalgebra of the Lie algebra u of all strictly upper-triangular matrices in gl_{dim g}(C).
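As a toy illustration of this triangularity (using sl_2 instead of an exceptional algebra; the encoding and helper names are ours, not from the paper), the matrix of ad(e) in the ordered basis e <_t h <_t f is strictly upper-triangular:

```python
# Toy check for sl_2: in the ordered basis e <_t h <_t f, the matrix of ad(e)
# is strictly upper-triangular. Encoding is ours.

# Brackets in sl_2: [e, f] = h, [h, e] = 2e, [h, f] = -2f.
BRACKET = {
    ('e', 'f'): {'h': 1},  ('f', 'e'): {'h': -1},
    ('h', 'e'): {'e': 2},  ('e', 'h'): {'e': -2},
    ('h', 'f'): {'f': -2}, ('f', 'h'): {'f': 2},
}
BASIS = ['e', 'h', 'f']  # the order e <_t h <_t f from the text

def ad_matrix(x):
    """Matrix of ad(x) in the ordered basis BASIS (column j = image of BASIS[j])."""
    cols = []
    for b in BASIS:
        img = BRACKET.get((x, b), {})
        cols.append([img.get(a, 0) for a in BASIS])
    # transpose: entry (i, j) = coefficient of BASIS[i] in ad(x) BASIS[j]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

# ad(e) e = 0, ad(e) h = -2e, ad(e) f = h  ->  strictly upper-triangular.
```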
Let GL(V) be the group of all invertible linear operators on a vector space V. Since we have fixed a basis of g, the group GL(g) can be identified with the group GL_{dim g}(C), and exp ad(n) ≅ N is identified with a subgroup of the group U of all upper-triangular matrices from GL_{dim g}(C) with 1's on the diagonal. Furthermore, using the Killing form on g and the trace form on gl(g), one can identify n* with the space n− = ⟨e_{−α}, α ∈ Φ+⟩_C and u* with the space u− = u^T, where the superscript T denotes the transposed matrix. Under all these identifications, it is enough to check that the coadjoint U-orbits of the linear forms f_{D,ξ_1} and f_{D,ξ_2} are distinct. Here, given a map ξ : D → C×, we denote by f_{D,ξ} the matrix representing the corresponding linear form on u; its entries are computed below.

We will now study the matrix f = f_{D,ξ} in more detail. The rows and the columns of matrices from gl(g) are now indexed by the elements of the Chevalley basis fixed above. Given a matrix x from gl(g) and two basis elements a, b, we denote by x_{a,b} the entry of x lying in the a-th row and the b-th column. The following proposition was proved in [9]. For the reader's convenience, we reproduce the proof here, because our main technical tool used in the proof of Theorem 1.6 is based on similar ideas.

Proposition 4.2 (see [9]). Let Φ be an irreducible root system, and let D be a non-singular rook placement in Φ+. Let β_0 be a root in D, and let ξ_1 and ξ_2 be maps from D to C× for which ξ_1(β_0) ≠ ξ_2(β_0). Assume that there exists a simple root α_0 ∈ ∆ satisfying (α_0, β_0) ≠ 0 and (α_0, β) = 0 for all β ∈ D such that β ≮ β_0. Then Ω_{D,ξ_1} ≠ Ω_{D,ξ_2}.

Proof. We obtain f_{h_{α_0}, e_{β_0}} = −ξ(β_0) · 2(α_0, β_0)/(α_0, α_0) ≠ 0. One may assume without loss of generality that h_{α_0} >_t h_{α_i} for all α_i ≠ α_0. We claim that

f_{h_{α_0}, e_α} = f_{e_{−γ}, e_{β_0}} = 0 for all e_α <_t e_{β_0} and all e_{−γ}, α, γ ∈ Φ+. (1)

Indeed, if α ∉ D then, evidently, f_{h_{α_0}, e_α} = 0. If α = β ∈ D and e_β <_t e_{β_0}, then β ≮ β_0, hence f_{h_{α_0}, e_β} = −ξ(β) · 2(α_0, β)/(α_0, α_0) = 0, because (α_0, β) = 0.
On the other hand, if f_{e_{−γ}, e_{β_0}} ≠ 0 for some γ ∈ Φ+, then β_0 = β − γ for some β ∈ D. This contradicts the condition β_0 ∉ S(β).
Our main technical tool generalizes the proposition above in the following way. Fix an order {β_1, . . ., β_m} on D and an order {α_1, . . ., α_n} on the simple roots in Φ+ such that h_{α_i} <_t h_{α_j} and e_{β_i} <_t e_{β_j} for i < j. Note that f_{h_{α_i}, e_{β_j}} = ξ(β_j) p_{i,j}, where p_{i,j} = −2(α_i, β_j)/(α_i, α_i). Given J ⊂ {1, . . ., m} and I ⊂ {1, . . ., n} with |I| = |J|, denote by ∆^J_I(ξ) the minor of the matrix f with the set of rows {h_{α_i}, i ∈ I} and the set of columns {e_{β_j}, j ∈ J}. Furthermore, let ∆^J_I be the determinant of the matrix whose (i, j)-th entry equals p_{i,j}, i ∈ I, j ∈ J, so that ∆^J_I(ξ) = ∆^J_I · Π_{j∈J} ξ(β_j).

Proposition 4.3. Assume that there exists an m-tuple I = (i_1, . . ., i_m) such that, for all 1 ≤ l ≤ m, the minor ∆^{J_l}_{I_l} is nonzero, where J_l = {1, . . ., l} and I_l = {i_1, . . ., i_l}. Let ξ_1 and ξ_2 be maps from D to C× such that Ω_{D,ξ_1} = Ω_{D,ξ_2}. Then ξ_1 = ξ_2.

Proof. For simplicity, we denote Φ+_1 = {δ | e_δ <_t e_β for all β ∈ D} and Φ+_2 = Φ+ \ (D ∪ Φ+_1). First, note that f_{e_{−γ}, e_{β_j}} = 0 and f_{h_{α_i}, e_δ} = 0 for all γ, δ ∈ Φ+_1, α_i ∈ ∆, β_j ∈ D. Indeed, f_{e_{−γ}, e_{β_j}} equals the coefficient of e_{β_j} in the expression Σ_{β∈D} ξ(β)[e_β, e_{−γ}]. But if this coefficient is nonzero, then β − γ = β_j for some β ∈ D, which contradicts the non-singularity of D. On the other hand, f_{h_{α_i}, e_δ} equals the coefficient of e_δ in the expression Σ_{β∈D} ξ(β)[e_β, h_{α_i}], which is zero because each [e_β, h_{α_i}] is proportional to e_β and δ ∉ D.

In the picture below we draw the matrix f schematically. The marks Φ+_1, D, Φ+_2, ∆, Φ− mean that the corresponding rows and columns of the matrix f are indexed by e_δ for δ ∈ Φ+_1, e_{β_j} for β_j ∈ D, e_γ for γ ∈ Φ+_2, h_{α_i} for α_i ∈ ∆, and e_α for α ∈ Φ−, respectively. We replace by big zeroes the blocks ∆ × Φ+_1 and Φ− × D, which are filled with zero entries. The minors ∆^{J_l}_{I_l} are the determinants of submatrices of the grey block ∆ × D.
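The factorization ∆^J_I(ξ) = ∆^J_I · Π_{j∈J} ξ(β_j) is just the rule that scaling the j-th column of a matrix scales its determinant; this can be checked numerically (a toy sketch with stand-in values, names ours):

```python
# Toy check of the factorization used before Proposition 4.3: if the (i, j)-th
# entry of a square matrix is xi[j] * p[i][j], its determinant equals
# det(p) * prod(xi). Stand-in values; names ours.

def det(m):
    """Laplace expansion along the first row (small matrices only)."""
    if not m:
        return 1
    return sum((-1) ** c * m[0][c] * det([r[:c] + r[c + 1:] for r in m[1:]])
               for c in range(len(m)))

p = [[1, 2], [3, 4]]   # stand-in for the scalars p_{i,j}
xi = [5, 7]            # stand-in values xi(beta_1), xi(beta_2)
scaled = [[xi[j] * p[i][j] for j in range(2)] for i in range(2)]
```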
The Lie algebra u corresponds to the root system A_{N−1}, where N = |Φ| + rk Φ. Let D_i be the subset of A+_{N−1} and ξ_i : D_i → C× be the map such that f_i = f_{D,ξ_i} (as an element of u*) belongs to the basic subvariety O_{D_i,ξ_i} of u* defined in Section 2, i = 1, 2. Put J = {e_α, α ∈ Φ} ∪ {h_{α_i}, α_i ∈ ∆}. Each pair (x, y) ∈ J × J such that the (x, y)-th entry of f_i lies under the diagonal corresponds to the unique root ε_y − ε_x ∈ A+_{N−1}; we denote by τ the inverse map from A+_{N−1} to the set of such pairs. According to André's theory, we may assume without loss of generality that f_1 and f_2 belong to the same basic subvariety of u*, so that D_1 = D_2. We will prove that ξ_1(β_j) = ξ_2(β_j) for all 1 ≤ j ≤ m by induction on j. The case j = 0 (with I_0 = ∅) serves as the evident base of the induction.
Let j ≥ 1. Note that each β_l, 1 ≤ l ≤ m, belongs to D_i, and that the intersection of τ(D_i) with {h_{α_i}, α_i ∈ ∆} × {e_β, β ∈ D} (i.e., with the "grey" area) coincides with {(h_{α_{i_l}}, e_{β_l})}, 1 ≤ l ≤ m (this follows immediately from Remark 2.7). Furthermore, recall the notion of ∆^{D_i}_α(f_i) for α ∈ A+_{N−1} from Section 2, where f_i is considered as an element of u*. It also follows from Remark 2.7 that, for each l from 1 to m and for i = 1, 2, this minor equals const_{i,l} · ∆^{J_l}_{I_l}(ξ_i), where const_{i,l} is a scalar independent of the values of ξ_i. By the inductive assumption, D_{1,j} = D_{2,j} and ξ_1(β_l) = ξ_2(β_l) for all l < j. We conclude that ξ_1(β_j) = ξ_2(β_j), as required, and the proof is complete.

From now on, let Φ = F_4. Recall that the set ∆ of simple roots can be identified with the following subset of R^4 (here {ε_i}, 1 ≤ i ≤ 4, is the standard basis of R^4 with the standard inner product):

∆ = {ε_2 − ε_3, ε_3 − ε_4, ε_4, (ε_1 − ε_2 − ε_3 − ε_4)/2}.

The set of positive roots is as follows:

Φ+ = {ε_i, 1 ≤ i ≤ 4} ∪ {ε_i ± ε_j, 1 ≤ i < j ≤ 4} ∪ {(ε_1 ± ε_2 ± ε_3 ± ε_4)/2}.

Proof. The proof is case-by-case and completely straightforward. As an example, consider the 17th rook placement D.
Clearly, the root β_3 (respectively, β_1) is orthogonal to a unique simple root, namely to α_3 (respectively, to α_4). There are no simple roots orthogonal to β_2. Write out the minor of the matrix f whose rows correspond to h_{α_i}, α_i ∈ ∆, and whose columns correspond to e_{β_j}, β_j ∈ D. Recall the notion of p_{i,j} introduced before Proposition 4.3.
Hence, in fact we have only one possibility for i_1: i_1 = 3. Next, for i_2 = 4 the corresponding minor is nonzero; therefore, we have to put i_2 = 4. Finally, for i_3 = 2, we again obtain a nonzero minor, so there is only one candidate for i_3: i_3 = 2. It is easy to check that the sequence (3, 4, 2) satisfies the conditions of Proposition 4.3.
All other rook placements from the table above can be considered similarly.
We are now ready to prove our second main result, Theorem 1.6, which claims that, for a non-singular orthogonal rook placement D ⊂ Φ+ in type F_4 and two distinct maps ξ_1, ξ_2 from D to C×, the associated coadjoint orbits Ω_{D,ξ_1} and Ω_{D,ξ_2} do not coincide.
The first root β_1 is maximal among all roots in each of these rook placements. Next, it is straightforward to check that the following maximal rook placements D_i (together with a simple root α_0 and a distinguished root β_0 ∈ D_i) satisfy the conditions of Proposition 4.2, except possibly the non-singularity of D_i. This implies that if D is a non-singular rook placement contained in one of these maximal rook placements and containing the root β_0, then D, β_0, α_0 satisfy the conditions of Proposition 4.2. Hence, if Ω_{D,ξ_1} = Ω_{D,ξ_2}, then ξ_1(β_0) = ξ_2(β_0). Now, let D be a non-singular subset of one of the rook placements D_1, . . ., D_10. Assume that D ⊂ D_1. Note that β_i ∈ S(β_j) for all 2 ≤ i ≤ 4 and 1 ≤ j < i.
By definition, the basic subvariety O_{D,ξ} corresponding to a rook placement D and a map ξ : D → C× is O_{D,ξ} = Σ_{α∈D} Ω_{{α},ξ_α}, where ξ_α is the restriction of ξ to {α}. For A_{n−1}, n* = ⊔_{D,ξ} O_{D,ξ}, and all the O_{D,ξ} are affine subvarieties of n* (see [1, Theorem 1]). Even for B_n, C_n and D_n, the analogous question is still open. Nevertheless, we may formulate the following conjecture for an arbitrary root system. Non-singularity of a rook placement D means that if α, β ∈ D and α ≠ β, then α − β ∉ Φ+; for A_{n−1}, all rook placements are automatically non-singular.

Conjecture 1.3. Each basic subvariety O_{D,ξ} is an affine subvariety of n*, and n* = ⊔_{D,ξ} O_{D,ξ}, where the union is taken over all non-singular rook placements D and all maps ξ : D → C×.

Example 2.6. Let n = 10, D = {ε_1 − ε_6, ε_3 − ε_{10}, ε_5 − ε_8}. In the picture below, the boxes corresponding to the roots from D are filled with ⊗'s, as above, while the boxes corresponding to the D-singular roots are marked grey. It turns out that to each D-regular root α one can assign a defining equation of O_{D,ξ}.

Remark 4.4. It follows from the conditions of Proposition 4.3 that if such an m-tuple I exists, then it is unique.