Introduction
Geometric algebra provides a practical alternative to conventional
3D vector methods which extends far more readily to higher dimensions. It also provides a
coordinate independent symbolic geometry (an "algebra of directions") extendable into a geometric calculus of
profound relevance to areas as diverse as quantum physics and computer vision.
The purpose of this work is to provide a concise but comprehensive introduction and
broad reference for geometric algebra for those interested in it as a powerful computational
and theoretical resource that spans and unifies a diverse range of fields.
Fuller, more formal mathematical treatments exist elsewhere and
this document can serve as a primer for tackling such works. It assumes familiarity with
"conventional" 3D constructs such as vectors and matrices and such basic mathematical functions
as cos(q) and e^{x}. Mathematical notations used are defined in the glossary.
Multivectors, the "elements" of Geometric Algebra, are a generalisation of traditional vectors that provide a far richer mathematical structure than vectors alone. Many programmers will have encountered particular multivectors before in the form of complex numbers and quaternions, and the closest analogy for the generalisation of vectors to multivectors is perhaps complex numbers as a generalisation of real numbers. Regarding real numbers as a special case of the more general "class" or "field" of complex numbers allows logarithms of negative numbers and additional functions such as "complex conjugation". By generalising vectors to a particular subset of multivectors not only can vectors be multiplied and divided by each other, but we obtain a multitude of useful conjugations and bilinear products and can usefully define logarithmic and trigonometric functions of them. Since particular N+2 dimensional multivectors can represent arbitrary N-dimensional lines, circles, planes, and spheres we can, for example, speak of taking the log of a sphere. Geometric objects become algebra elements amenable to both purely symbolic and computational (numerical) manipulation.
The notion of multiplying two 3D vectors together is familiar to programmers as the so called vector cross product
a×b being a vector of direction perpendicular to vectors a and b with magnitude
|a||b| sin(q) where q is the angle subtended by a and b.
However, this only works in 3D. The fundamental essence of geometric algebra
is the geometric vector product ab = a.b + aÙb where . is the traditional
scalar-valued vector dot product and "2-blade" aÙb is the wedge or outer product
described later. Everything else follows from this.
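As a minimal sketch of this fundamental product, assuming Euclidean Â^{3} (the names Vec3, VecProduct and geometricProduct below are illustrative, not from the text), the geometric product ab of two 3D 1-vectors can be held as a scalar part a.b plus three bivector coordinates for aÙb, which are numerically the same three numbers as the cross product a×b:

```cpp
#include <cassert>

// Sketch: geometric product of two Euclidean 3D 1-vectors,
// ab = a.b + a^b, stored as scalar + bivector components.
struct Vec3 { double x, y, z; };
struct VecProduct {
    double s;               // scalar part  a.b
    double b23, b31, b12;   // bivector part a^b on e23, e31, e12
};

VecProduct geometricProduct(const Vec3& a, const Vec3& b) {
    VecProduct r;
    r.s   = a.x*b.x + a.y*b.y + a.z*b.z;   // a.b
    r.b23 = a.y*b.z - a.z*b.y;             // (a^b) coefficient of e2^e3
    r.b31 = a.z*b.x - a.x*b.z;             // (a^b) coefficient of e3^e1
    r.b12 = a.x*b.y - a.y*b.x;             // (a^b) coefficient of e1^e2
    return r;
}
```

Note that for a vector multiplied by itself the bivector part vanishes and the product reduces to the scalar |a|^{2}, as the contraction rule requires.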
This section describes what multivectors are mathematically and lists the many operations and products
that can be usefully applied to them
in considerable detail. Of necessity some of this material is mathematically intensive and the reader is
encouraged to "skim" rather than
absorbing every product, conjugation, normalisation technique and logarithmic computation stratagem
on their first pass.
The "point" of multivectors is what you can do with them, and this is addressed in the
later sections once the basics of the symbolic and computational manipulations are covered.
When first encountering multivectors they can seem bewildering, with a plethora of products and an overload of operators provided for their manipulation. But familiarity breeds respect. Given that multivectors are the "language" of (dynamic) geometry and by implication of nature itself, a half-dozen new symbols does not seem excessive.
In the Multivectors Programming chapter we describe how to implement
multivectors of both low and high dimension N in C/C++.
In the Multivectors as Geometric Objects chapter we will see some of the
applications of multivectors in the elegant representation and manipulation of N-dimensional
spheres, planes, lines and conics.
In the Multivectors as Transformations chapter we will see how multivectors can also be used to transform, distort, displace, and morph
such geometric constructs.
In Multivector Arcana we cover some more esoteric mathematical
aspects of multivectors of less general interest.
In later sections we cover Multivector Calculus and the uses of multivectors
in physics, in particular Relativity and Quantum Mechanics.
This treatment favours
the contractive "computer scientist's" inner product a¿b over the semisymmetric
"physicist's" inner product a.b where possible because this is arguably the more fundamental
and has certain functional advantages.
Multivectors
"And therefore in geometry (which is the only science that it hath pleased God hitherto to bestow
on mankind), men begin at settling the significations of their words;
which settling of significations, they call definitions, and place them in the beginning of their
reckoning."
--- Thomas Hobbes, Leviathan
Although a notable advantage of geometric algebra is coordinate independence, we will initially take an orthonormal coordinate (basis) based approach here since this is likely to be the most practicable in a programming context.
Geometric algebra is essentially a set of arithmetical techniques for manipulating N-dimensional
vectors and to see how to properly multiply (and divide!) vectors we must first generalise the concept of
vectors and scalars.
Given k linearly independent N-dimensional vectors a_{1},a_{2},..,a_{k} from a vector space
U^{N}
their outer product a_{k} = a_{1}Ùa_{2}Ù...Ùa_{k}
(not to be confused with the 3D vector "cross" product) is known as a blade
of grade (aka. step or degree) k,
or a k-blade.
[ We are interested initially in U^{N}=Â^{N}, the space of real coordinate N-D vectors, but will refer to
U^{N} to emphasise applicability to alternate (eg. nonEuclidean) spaces
which are of interest to us, particularly with regard to relativistic physics.
By "linearly independent" we here mean that no one of the a_{1},a_{2},..a_{k} can be expressed as a real-weighted sum of the others
]
The fundamental rules for Ù are
Antisymmetry: aÙb = -bÙa
( and so aÙa=0) for any vectors a and b ;
Linearity: aÙ(b+c) = aÙb + aÙc ; and
Associativity:
aÙ(bÙc) = (aÙb)Ùc.
A k-blade a_{k} can be thought of as representing an orientated and scaled k-dimensional
subspace of U^{N}, one in which all the vectors satisfy aÙa_{k} = 0.
We say that a_{1},a_{2},..,a_{k} are a k-frame for this subspace.
A linear "weighted additive" combination of k-blades is known as a k-vector. An example 4D 3-vector is
½e_{1}Ùe_{3}Ùe_{4} + (Ö7)e_{1}Ùe_{2}Ùe_{3} .
A 0-vector is a 0-blade is a scalar.
We refer to a k-blade as proper if k³1.
A 1-vector is a 1-blade is a conventional vector.
A 2-vector
(aka. bivector) is a sum of scaled 2-blades and need not "reduce" to a single 2-blade for N>3.
2-blades can be considered geometrically as directed areas (ie. a plane and a signed scalar).
For N£3, any k-vector is a k-blade. For N>3 this holds only for k£1 or k³N-1.
[ Proof : ae_{2}Ùe_{3}+be_{3}Ùe_{1}+ge_{1}Ùe_{2}
= (e_{1}-(a/g)e_{3})Ù(ge_{2}-be_{3}) if g¹0,
(ae_{2}-be_{1})Ùe_{3} else .
All 3D 3-vectors are multiples of e_{1}Ùe_{2}Ùe_{3}
.]
This makes N=3 a fundamentally simpler case than N³4 to the extent that "geometric intuitions"
founded in 3D can be actively misleading for N³4.
Because of this, the reader is here advised to initially consider
multivectors as algebraic rather than geometric entities, to be manipulated symbolically and numerically using grammar rules
rather than geometric constructions.
Consider the set Â_{N} (aka. G_{N} and Cl_{N} in the literature) of all linear combinations (with real "coefficients" or "weightings") of k-vectors for 0£k£N, our 1-vectors being taken from the N-dimensional vector space Â^{N}. Clearly Â^{N} Ì Â_{N} (or, more properly, is represented within Â_{N}) and in fact Â_{N} has dimension å_{k=0}^{N} ^{N}C_{k} = 2^{N}, there being ^{N}C_{k} º N! (N-k)!^{-1} k!^{-1} distinct k-blades in N dimensions.
We refer to the elements of Â_{N} as multivectors.
An example Â_{3} multivector, albeit one unlikely to arise in practice,
is 3+e_{1}-4e_{2}+½e_{1}Ùe_{2}+(Ö7)e_{2}Ùe_{3} + pe_{1}Ùe_{2}Ùe_{3} .
We can thus think of a general N-D multivector as an arbitrary real-weighted combination of 2^{N} distinct basis blades.
[ Mathematically, one can take the "blade coefficients" from any field or "number space".
It is the utilisation of Â - essentially identifying the blade "coefficients" or "coordinates"
with "scalars" (0-vectors) - that
distinguishes "geometric algebra" from more general Clifford algebras of only passing concern here.
If we allow "complex number" blade coefficients, for example, we obtain a space C_{N} of dimension 2^{N+1} .
]
Multivectors can thus be represented with 2^{N} dimensional 1-vectors, with respect to
the "extended basis" generated by a given set of N linearly independent N-D basis 1-vectors e_{1},e_{2},...,e_{N}, and this tells us how to add and subtract multivectors,
but not how to multiply and divide them. For that we will need the "geometric product" and its associated "subproducts".
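The extended-basis coordinate view sketched above translates directly into code: a multivector becomes 2^{N} real coefficients, one per basis blade, and addition is componentwise. A minimal sketch for N=3 (the index-by-bitmask convention and the names Multivector and add are ours, anticipating the later programming chapter rather than quoting it):

```cpp
#include <array>
#include <cassert>

// Sketch: an R_3 multivector as 8 blade coefficients, indexed by a bitmask
// whose set bits name the 1-vector factors (e.g. index 0b101 = 5 holds the
// e13 coefficient, index 0 the scalar). Addition is plain componentwise work.
const int N = 3;
using Multivector = std::array<double, 1 << N>;   // 2^N coefficients

Multivector add(const Multivector& a, const Multivector& b) {
    Multivector r{};
    for (unsigned i = 0; i < r.size(); ++i) r[i] = a[i] + b[i];
    return r;
}
```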
Conflicting Terminologies
Some authors such as Pavsik use the term "k-vector" for what we will call a k-blade
and "multivector" for our k-vector (ie. a single-graded multivector), adopting the term polyvector for our multivector.
The term k-vector is sometimes used in the literature to refer to a k-dimensional 1-vector. However
it is more common to spell the number so that, for example, four-vector typically denotes a 1-vector in a 4D spacetime; three-vector denotes a
1-vector in Â^{3} and so on, and we will adopt this convention here.
Notations and Coordinates
A general N-D multivector is thus the sum of a scalar, a 1-vector, a 2-vector,..., and an N-vector
but we will frequently be interested in pure k-blades or k-vectors and will benefit from a notation
that distinguishes such from general multivectors.
We will use the following fonts to denote geometric algebra elements
Font | Represents | Also known as |
a | Proper blade | |
a | General multivector | |
a | 0-vector | Scalar |
a | 1-vector | Vector, 1-blade |
a | 2-blade | |
a | 2-vector | Bivector |
a | 3-blade | |
a | 3-vector | Trivector |
a | (N-1)-vector | Hyperblade, Pseudovector |
a | N-vector | Pseudoscalar |
a | <0;N>-vector | Scalar-pseudoscalar pair. |
We are frequently interested in issues such as whether the square of a multivector is a pure scalar
but in practice this may mean checking that any residual nonzero coordinates are a "negligible"
proportion of the scalar part, which can be problematic if the scalar part is also small.
We define the sparsity of a multivector with respect to a given basis
as the number of zero coefficients (coordinates) in its representation in that basis.
Inverse frames
Suppose we have a possibly non-orthonormal linearly independent basis of N N-D 1-vectors
providing a coordinate frame (ie. a set of axes) E=(e_{1},e_{2},...,e_{N}) for U^{N}.
We can construct a reciprocal or inverse frame
(e^{1},e^{2},...,e^{N}) so that e^{i}.e_{j} = 1 when i=j and 0 else
where . is the traditional scalar ("dot") product of two 1-vectors.
If E is orthogonal then provided no e_{i}^{2}=0 we can set e^{k} º e_{k}^{-2}e_{k} .
More generally we require
e^{k} º
(-1)^{k-1}(e_{1}Ù..e_{k-1}Ùe_{k+1}Ù...e_{N}) i^{-1}
where i=e_{12..N} ,
though this may not make much sense to the reader till he is more familiar with Ù and the pseudoscalar i discussed later.
We define a notation
e^{ij..m} º e^{i}Ùe^{j}Ù...e^{m} .
If E is orthogonal (ie. e_{i}.e_{j}=0 for i¹j) then
(using the geometric product defined below) we have e^{k}e_{k} = 1
(provided e_{k}^{2}¹0) ; but in general
e^{k}e_{k} has a non-zero 2-blade component because e^{k}Ùe_{k} ¹ 0 .
E induces both the coordinate expression
x = å_{i=1}^{N} x^{i}e_{i}
[ with x^{i} º e^{i}.x ],
and the reciprocal coordinate expression
x = å_{i=1}^{N} x_{i}e^{i}
[ with x_{i} º x.e_{i} ].
e^{i} is the 1-vector geometric multiplier that "separates" 1-vector x
into x^{i} +
å_{j ¹ i }x^{j}e^{i}Ùe_{j}
.
In Â^{N} : (i) an orthonormal frame is self-inverse ( e_{i} = e^{i} ; x_{i} = x^{i} ) ; (ii) a general frame, expressed as an N×N matrix E with respect to a fixed orthonormal frame F=(f_{1},f_{2},...f_{N}) in conventional manner via ( e_{i} = å_{i=1}^{N} E_{ji}f_{j} ) has as its reciprocal frame the frame having matrix (E^{-1})^{T} = (E^{T})^{-1} with respect to F, ie. the inverse transpose matrix.
Letting
E_{ij} º e_{i}¿e_{j} and
E^{ij} º e^{i}¿e^{j} ,
we have x_{j} = å_{i=1}^{N} E_{ij}x^{i} ;
x^{j} = å_{i=1}^{N} E^{ij}x_{i} .
The N×N symmetric matrices {E_{ij}} and {E^{ij}}
are related by {E^{ij}} = {E_{ij}}^{-1} where ^{-1} is the conventional matrix inverse.
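The inverse-transpose characterisation above can be sketched numerically. For Â^{2} the reciprocal frame has a closed form via the 2×2 matrix inverse (the names Vec2 and reciprocalFrame, and the example frame in the test, are ours):

```cpp
#include <cassert>

// Sketch (R^2): the reciprocal frame of a frame whose vectors are the
// columns of matrix E is given by the columns of the inverse transpose
// (E^-1)^T, so that e^i . e_j = 1 when i=j and 0 otherwise.
struct Vec2 { double x, y; };
double dot(Vec2 a, Vec2 b) { return a.x*b.x + a.y*b.y; }

// Reciprocal frame {r1, r2} of {e1, e2}; the frame must be
// linearly independent so that det != 0.
void reciprocalFrame(Vec2 e1, Vec2 e2, Vec2& r1, Vec2& r2) {
    double det = e1.x*e2.y - e1.y*e2.x;
    r1 = {  e2.y/det, -e2.x/det };   // first column of (E^-1)^T
    r2 = { -e1.y/det,  e1.x/det };   // second column of (E^-1)^T
}
```

For an orthonormal frame det=±1 and the formula returns the frame itself, matching the self-inverse property noted above.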
Inverse Frame Units
We discuss the mathematically ticklish issue of inverse frames here because we are discussing coordinate representations.
If frame vectors are assigned units, e_{1} having length 5 m say, then e^{1} must be regarded as having "length"
5^{-1} m^{-1} so that e^{1}¿e_{1} = 1 m^{0} is dimensionless. Coordinates
x^{i} =e^{i}¿x are then unitless while reciprocal coordinates
x_{i} = e_{i}¿x have units m^{2}.
Extended inverse frames
Given an extended basis {e_{[.i.]} : 0£i<2^{N} }
we can construct an extended pureblade inverse frame
{e^{[.i.]}} which satisfies
e^{[.i.]}_{*} e_{[.j.]}
º (e^{[.i.]}e_{[.j.]})_{<0>}
= 1 when i=j and 0 else .
The Geometric Product
"He who can properly define and divide is to be considered a god."
--- Plato
To make Â_{N} an algebra (rather than merely a linear space) we require a "multiplication"
with the following properties:
a(bc) = (ab)c (associativity)
a(b+c) = ab + ac
; (b+c)a = ba + ca (distributivity)
aa = Sig(a)
where Sig(a) is a scalar for all 1-vector a (contraction).
Of principal interest here is the contraction Sig(a) = e|a|^{2}
where e (the signature of a) is either ±1 or 0
and |a| is the conventional magnitude ("length") of 1-vector a.
A vector is null if a^{2}=0.
We write Â^{p,q,r}
for a vector space having orthogonal basis {e_{1},e_{2},...e_{N}} where
N=p+q+r and
Sig(e_{i}) =
1 for 1£i£p ;
-1 for p<i£p+q ;
0 for p+q<i£N .
We write Â_{p,q,r} for the associated geometric algebra.
We write Â_{p,q}
(aka. Cl_{p,q})
as an abbreviation for Â_{p,q,0}
and Â_{N} as an abbreviation for Â_{N,0,0} .
We define the geometric product of any a by a scalar
b in the obvious "coordinatewise"
commutative manner
(ba)_{[ij..m]} = (ab)_{[ij..m]}
º b(a_{[ij...m]}) .
We define the geometric product of two 1-vectors by
ab º a.b + aÙb where a.b
is the conventional Â^{N} (or U^{N}) vector "dot" product.
We can then extend this definiton by means of the associativity and contraction rules
to higher grade blades and hence
(by distributivity) to multivectors generally.
The geometric product is noncommutative (ab¹ba in general), but this is actually an asset; the "degree" of
noncommutativity of the geometric product of two multivectors is a measure of their orthogonality.
A unit multivector is a multivector satisfying (aa)_{<0>}=±1.
More generally we have ab= a¿b + aÙb where a is a 1-vector and b
is a general multivector.
abc = a((b¿c)+bÙc)
= (b¿c)a + a¿(bÙc) + aÙbÙc
abcd = a(b¿(cÙd) + (c¿d)b + bÙcÙd)
= a¿(b¿(cÙd)) + (c¿d)(a¿b) + a¿(bÙcÙd)
+ aÙ(b¿(cÙd)) + (c¿d)(aÙb) + aÙbÙcÙd
Â_{2}
We can tabulate the geometric product for Â_{2} with respect to
a basis for Â_{2} derived
from an orthonormal basis {e_{1},e_{2}} for Â^{2}.
ab for Â_{2} | a |
| 1 | e_{1} | e_{2} | e_{12} |
| 1 | 1 | e_{1} | e_{2} | e_{12} |
b | e_{1} | e_{1} | 1 | -e_{12} | -e_{2} |
| e_{2} | e_{2} | e_{12} | 1 | e_{1} |
| e_{12} | e_{12} | e_{2} | -e_{1} | -1 |
ab for Â_{3} | |||||||||
1 | e_{1} | e_{2} | e_{3} | e_{23} | e_{31} | e_{12} | e_{123} | ||
1 | 1 | e_{1} | e_{2} | e_{3} | e_{23} | e_{31} | e_{12} | e_{123} | |
e_{1} | e_{1} | 1 | -e_{12} | e_{31} | e_{123} | e_{3} | -e_{2} | e_{23} | |
e_{2} | e_{2} | e_{12} | 1 | -e_{23} | -e_{3} | e_{123} | e_{1} | e_{31} | |
b | e_{3} | e_{3} | -e_{31} | e_{23} | 1 | e_{2} | -e_{1} | e_{123} | e_{12} |
e_{23} | e_{23} | e_{123} | e_{3} | -e_{2} | -1 | e_{12} | -e_{31} | -e_{1} | |
e_{31} | e_{31} | -e_{3} | e_{123} | e_{1} | -e_{12} | -1 | e_{23} | -e_{2} | |
e_{12} | e_{12} | e_{2} | -e_{1} | e_{123} | e_{31} | -e_{23} | -1 | -e_{3} | |
e_{123} | e_{123} | e_{23} | e_{31} | e_{12} | -e_{1} | -e_{2} | -e_{3} | -1 |
Writing i º e_{123} º e_{1}Ùe_{2}Ùe_{3} = e_{1}e_{2}e_{3} we see from the above
table that i commutes with all multivectors and satisfies i^{2} = -1.
We also observe that
aÙb = i(a×b)
= (a×b)i
where a×b = (aÙb)e_{123}^{-1} is the conventional Â^{3} vector "cross" product
and that ia spans the plane normal to a.
We also have aÙbÙc = (a.(b×c))i.
We note in passing that the subspace Â_{3 +}
consisting of all
3D multivectors having no odd grade component, ie. the space of multivectors of the
form a + be_{23} + ge_{31} + de_{12},
is closed under the geometric product and is isomorphic to the
quaternion space Q, as is Â_{0,2}, with a + be_{23} + ge_{31} + de_{12}
corresponding (up to sign conventions) to the quaternion a + bi + gj + dk.
Biquaternions
We further note that a general Â_{3} multivector can be uniquely expressed as
(a + a_{2}) + (b + b_{2})e_{123}
where a,b are scalars and a_{2},b_{2}
are pure Â_{3} bivectors.
An alternative biquaternion (aka. complex four-vector aka. Pauli spinor)
representation of Â_{3} sets
i=i=e_{123} ,
s_{1}=e_{1} ,
s_{2}=e_{2} ,
s_{3}=e_{3}
(satisfying s_{i} s_{j}=d_{ij} + e_{ijk}i s_{k}).
The biquaternion
(a_{0}+b_{0}i)
+(a_{1}+b_{1}i)e_{1}
+(a_{2}+b_{2}i)e_{2}
+(a_{3}+b_{3}i)e_{3}
is equivalent to the Â_{3} multivector
a_{0} + a_{1}e_{1}+a_{2}e_{2}+a_{3}e_{3}
+ b_{1}e_{23}+b_{2}e_{31}+b_{3}e_{12}
+ b_{0}e_{123} .
The product of two 3D 1-vectors is usually taken in this context to
be ab º a.b + i(a×b) which is equivalent
to the Â_{3} geometric product since i(a×b)=aÙb.
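The equivalence ab = a.b + i(a×b) can be verified concretely in the Hermitian 2×2 matrix form of s_{1},s_{2},s_{3} (the Pauli matrices, tabulated in the matrix representations section below). A sketch, with our own illustrative names M2, mul and sigma:

```cpp
#include <cassert>
#include <complex>

// Sketch: check s1 s2 = i s3 (i.e. e1 e2 = e12 in disguise) using the
// Pauli-matrix model of R_3, where a 1-vector (a1,a2,a3) maps to the
// Hermitian matrix a1*s1 + a2*s2 + a3*s3.
using C = std::complex<double>;
struct M2 { C m[2][2]; };

M2 mul(const M2& a, const M2& b) {
    M2 r{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            r.m[i][j] = a.m[i][0]*b.m[0][j] + a.m[i][1]*b.m[1][j];
    return r;
}

// sigma(a) = a1*s1 + a2*s2 + a3*s3 = [[a3, a1-i a2], [a1+i a2, -a3]]
M2 sigma(double a1, double a2, double a3) {
    const C i(0, 1);
    return { { { C(a3),        C(a1) - i*a2 },
               { C(a1) + i*a2, C(-a3)       } } };
}
```

For a=e_{1}, b=e_{2} we have a.b=0 and a×b=e_{3}, so the product should be i s_{3} = diag(i,-i); the matrix multiply confirms this.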
Overview
Given an orthonormal 1-vector basis e_{1},e_{2},..,e_{N} we can regard multivectors as real-weighted sums of the
2^{N} basis blades and such a "coordinates form" makes addition and multiplication particularly straightforward.
As an example computation consider
(1+2e_{13})(3e_{1}+4e_{1234})
= 3e_{1}+4e_{1234} + 6e_{13}e_{1} + 8e_{13}e_{1234}
= 3e_{1}+4e_{1234} + 6e_{1}e_{3}e_{1} + 8e_{1}e_{3}e_{1}e_{2}e_{3}e_{4}
= 3e_{1}+4e_{1234} - 6(e_{1})^{2}e_{3} - 8e_{3}(e_{1})^{2}e_{2}e_{3}e_{4}
= 3e_{1}+4e_{1234} - 6e_{3} - 8e_{3}e_{2}e_{3}e_{4}
= 3e_{1}+4e_{1234} - 6e_{3} + 8(e_{3})^{2}e_{2}e_{4}
= 3e_{1}+4e_{1234} - 6e_{3} + 8e_{24} where we have assumed e_{1}^{2}=e_{3}^{2}=+1.
The "geometric product" can thus be viewed as a purely computational construct
in which we can think of sliding basis 1-vectors across each other, introducing a sign flip for every basis 1-vector crossed,
until another instance of the same basis 1-vector is encountered whereupon the two 1-vectors "condense"
into their ±1 scalar signature. The grade of the product of an orthonormal basis k-blade with an l-blade from the same basis
is thus £k+l.
Though we constructed our extended basis and defined the geometric product using the outer product Ù, we could instead have assumed only the
traditional scalar vector dot product to define the basis orthonormality condition e_{i}.e_{j} =
e_{i} d_{ij} where d_{ij}=1 if i=j and 0 otherwise
; defined the extended basis elements by incorporating 1 and unordered pairs, triples, ... , and N-tuples of
the e_{i}
into the basis written as e_{ij..m}; and finally defined the geometric product of two
basis elements
e_{ij..m}e_{no..r} logically in this "index hopping with sign tracking" manner.
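With the bitmask blade indexing introduced earlier, this "index hopping with sign tracking" rule is only a few lines of code: the product blade index is the XOR of the factor indices, and the sign counts the 1-vector slides needed to reach canonical order. A sketch assuming an orthonormal Euclidean basis (all signatures +1; mixed signatures would add a factor per annihilated pair; the names gp and reorderSign are ours):

```cpp
#include <array>
#include <cassert>

// Sketch: geometric product of bitmask-indexed basis blades, Euclidean case.
int popcount(unsigned v) { int c = 0; for (; v; v >>= 1) c += v & 1; return c; }

// Sign from sliding the factors of blade a past those of blade b
// into canonical (ascending index) order: one flip per crossing.
int reorderSign(unsigned a, unsigned b) {
    int swaps = 0;
    for (a >>= 1; a != 0; a >>= 1) swaps += popcount(a & b);
    return (swaps & 1) ? -1 : 1;
}

const int N = 4;
using Multivector = std::array<double, 1 << N>;   // 2^N blade coefficients

Multivector gp(const Multivector& x, const Multivector& y) {
    Multivector r{};
    for (unsigned i = 0; i < x.size(); ++i)
        for (unsigned j = 0; j < y.size(); ++j)
            r[i ^ j] += reorderSign(i, j) * x[i] * y[j];   // e_i e_j -> ±e_{i XOR j}
    return r;
}
```

Running this on the worked example above, (1+2e_{13})(3e_{1}+4e_{1234}), reproduces 3e_{1}+4e_{1234}-6e_{3}+8e_{24}.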
Pseudoscalars
The N-vectors from a given U^{N}
are all equivalent apart from magnitude ("scale","volume") and sign ("handedness"). They are
accordingly known as pseudoscalars.
Conversely a nonzero pseudoscalar "spans" (and can be thought of as representing) U^{N} .
We will use the font a to denote a blade viewed as a pseudoscalar.
Let i º i_{N} be the unit pseudoscalar e_{12..N} for Â_{N}.
i satisfies i ^{2} = (-1)^{½(N-1)N} and
commutes with all multivectors if N is odd. For even N we have
ia_{k} = (-1)^{k}a_{k}i
so that i (anti)commutes with (odd)even blades.
We say a multivector is central if it commutes with all other multivectors.
For even N, only scalars are central but for odd N any <0;N>-multivector (scalar plus pseudoscalar)
is central.
An (N-1)-vector is sometimes referred to as a pseudovector but we favour the term hyperblade here.
Taking the geometric product of a multivector with i maps k-blades to (N-k)-blades and vice versa.
In particular it maps scalars to pseudoscalars (and vice versa)
and vectors to pseudovectors (and vice versa).
Note that a k-blade acts as a pseudoscalar when acting upon multivectors wholly contained
within the space it spans. The geometric product of a pseudoscalar i with a blade
a_{k} contained in the subspace spanned by i spans the subspace of i
complementary (orthogonal) to subspace a_{k}.
In particular, the geometric product of any blade with itself is a scalar.
The signature of a blade b_{k},
e_{bk} ,
is the sign of b_{k}^{2}
(or zero if b_{k}^{2} = 0 in which case the blade is said to be null).
Duality
We define the dual of a multivector a with respect to a pseudoscalar i
spanning a space containing a by
a^{*} º ai^{-1} = a¿i^{-1}
[ Where ¿ is the contractive inner product defined below.
Some authors favour ai, but if i is a unit pseudoscalar the difference is only one of sign.
]
a^{*} spans the subspace of i "perpendicular" to pureblade a.
If b is an unmixed (ie. odd or even) multivector, it will either commute or anticommute with i.
For odd N, the pseudoscalar commutes with everything and we have
(a^{*})b = a(b^{*}) = (ab)^{*} .
For even N, i (anti)commutes with (odd) even multivectors and we have
(a^{*})b = a(b^{#}^{*}) = (a(b^{#}))^{*} where ^{#} is the grade involution conjugation defined
below.
In the presence of a standard basis for Â_{N}, computing a^{*} for the unit pseudoscalar i=e_{123..N} is a computationally trivial "shuffling" of coordinates requiring no numeric computations. For the bitwise ordering we have (ai^{-1})_{[.i.]} = ± a_{[.(i XOR (2^{N}-1)).]} where the actual sign depends upon N and the bitwise parity of i.
The inverse dual or undual is defined by a^{-*} º a^{*}i^{2} = ai so that
(a^{-*})^{*} = a.
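The "shuffling" claim above can be sketched for Â_{3}, where i^{-1} = -e_{123} (since i^{2}=-1): the dual of a basis blade with index b is the blade b XOR 7 together with a reordering sign, and no coefficient arithmetic is needed (the names dualIndex and reorderSign are ours):

```cpp
#include <cassert>

// Sketch: dual a* = a i^-1 in R_3 as a pure index shuffle plus a sign.
int popcount(unsigned v) { int c = 0; for (; v; v >>= 1) c += v & 1; return c; }

int reorderSign(unsigned a, unsigned b) {          // canonical reordering sign
    int swaps = 0;
    for (a >>= 1; a != 0; a >>= 1) swaps += popcount(a & b);
    return (swaps & 1) ? -1 : 1;
}

// Dual of basis blade with index b in R_3: returns the complementary
// blade index b XOR 7 and sets the sign; i^-1 = -e123 since i*i = -1.
unsigned dualIndex(unsigned b, int& sign) {
    const unsigned I = 7;                          // e123 bitmask, 2^3 - 1
    sign = -reorderSign(b, I);
    return b ^ I;
}
```

For example e_{1}^{*} = e_{1}(-e_{123}) = -e_{23}, while e_{12}^{*} = +e_{3}.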
Centrality
Any multivector a defines an algebra Cent(a) known as the centralizer of a
consisting of all multivectors that commute with a.
Bivectors
For N£4 the squares of bivectors commute with even multivectors while
for N=4 the pseudoscalar part of w^{2} is negated on commutation with odd multivectors.
Left multiplication by a bivector ( a ® b_{2}a ) casts scalars into b_{2}.
If b_{2} is a 2-blade it rotates directions within b_{2} (by b_{2}),
while casting directions perpendicular to b_{2} into trivectors. For N=3, it casts bivectors in b_{2} to scalars
while bivectors normal to b_{2} are rotated by b_{2} and the pseudoscalar is cast to 1-vector b_{2}^{*}.
Right multiplication by a bivector has similar effects. Indeed a 2-blade commutes with all multivectors
in its dual space , which is not the case for a 1-vector. Consequently, it is occasionally advantageous
to represent 3D 1-vectors by their bivector duals.
Let b be a multivector having only bivector and scalar components.
a¿b = a.b while
b¿a = b.a + b_{0}a
b_{2}×a sends ^(a,b_{2}) to 0 and rotates ¯(a,b_{2}) in b_{2} by ½p, scaling it by |b_{2}|.
The operation a ® (ba).a for nonnull b is interesting, casting ¯(a,b) to 0 and
^(a,b) into
|^(a,b)|^{2}b
= |aÙb|^{2}(b^{-2})b
= |aÙb^{~}|^{2}b
.
Matrix representations
Multivectors can be represented with matrices (physicists have done so, largely unknowingly, for decades)
with the geometric product corresponding to the traditional matrix product, but
matrices are seldom the best way to implement multivectors computationally and tend to obscure the underlying geometries.
Nonetheless, we will describe some matrix representations here: partly to convince the more skeptical reader that
multivectors do actually "exist", and also to
provide an alternate model for those who have philosophical difficulties
with the concept of adding "different grade" blades to form a composite "entity". Geometric (multivector) algebra becomes
the algebra of a particular "form" of matrix, requiring only standard matrix multiply and inversion techniques. Such
an approach is computationally profligate, but can sometimes provide alternative insights as well as quick-and-dirty
programming applications exploiting existing matrix suites.
Â_{p,q,r} in Â_{2^{N}×2^{N}}
With regard to a particular extended basis, an N-D multivector a can be expressed as a 2^{N}
dimensional real 1-vector. But as a function mapping multivectors to multivectors a(x) = ax ,
ie. a transform of 2^{N}-D 1-vectors, a can also be represented as a
2^{N}×2^{N} matrix.
Taking a=a^{0}+a^{1}e_{1}+a^{2}e_{2}+a^{12}e_{12} in Â_{p,q,r}
with p+q+r=2, for example,
we have
(a^{0}+a^{1}e_{1}+a^{2}e_{2}+a^{12}e_{12})(x^{0}+x^{1}e_{1}+x^{2}e_{2}+x^{12}e_{12})
= (a^{0}x^{0}+e_{+1}a^{1}x^{1}+e_{+2}a^{2}x^{2}-e_{+1}e_{+2}a^{12}x^{12})
+ (a^{1}x^{0}+a^{0}x^{1}+e_{+2}a^{12}x^{2}-e_{+2}a^{2}x^{12})e_{1}
+ (a^{2}x^{0}+a^{0}x^{2}-e_{+1}a^{12}x^{1}+e_{+1}a^{1}x^{12})e_{2}
+ (a^{0}x^{12}+a^{1}x^{2}-a^{2}x^{1}+a^{12}x^{0})e_{12}
which we can express as
æ | a^{0} | e_{+1}a^{1} | e_{+2}a^{2} | -e_{+1}e_{+2}a^{12} | ö | æ | x^{0} | ö | |
ç | a^{1} | a^{0} | e_{+2}a^{12} | -e_{+2}a^{2} | ÷ | ç | x^{1} | ÷ |
ç | a^{2} | -e_{+1}a^{12} | a^{0} | e_{+1}a^{1} | ÷ | ç | x^{2} | ÷ | |
è | a^{12} | -a^{2} | a^{1} | a^{0} | ø | è | x^{12} | ø |
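Specialising the matrix above to Euclidean Â_{2} (e_{+1}=e_{+2}=+1) gives a concrete left-multiplication operator on coefficient vectors (1, e_{1}, e_{2}, e_{12}). A sketch with our own illustrative names Coeffs, leftMulMatrix and apply:

```cpp
#include <array>
#include <cassert>

// Sketch (Euclidean R_2): left multiplication by a fixed multivector a is a
// linear map on coefficient vectors (scalar, e1, e2, e12), so a can be
// encoded as a 4x4 matrix acting on x's coefficients.
using Coeffs = std::array<double, 4>;   // (scalar, e1, e2, e12)

std::array<Coeffs, 4> leftMulMatrix(const Coeffs& a) {
    return {{ { a[0],  a[1],  a[2], -a[3] },
              { a[1],  a[0],  a[3], -a[2] },
              { a[2], -a[3],  a[0],  a[1] },
              { a[3], -a[2],  a[1],  a[0] } }};
}

Coeffs apply(const std::array<Coeffs,4>& m, const Coeffs& x) {
    Coeffs r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) r[i] += m[i][j] * x[j];
    return r;
}
```

For instance the matrix of a=e_{1} applied to x=e_{2} yields e_{12}, and the matrix of e_{12} applied to e_{12} yields -1, matching the Â_{2} product table.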
Far more compact matrix representors for multivectors are typically available. The
following are all "maximally compact" in that they
require precisely 2^{N} real scalar parameters to hold a general N-D multivector.
Â_{2} in Â_{2×2}
1 | = | æ | 1 | 0 | ö | e_{1} | = | æ | 1 | 0 | ö | e_{2} | = | æ | 0 | 1 | ö | e_{12} | = | æ | 0 | 1 | ö |
è | 0 | 1 | ø | è | 0 | -1 | ø | è | 1 | 0 | ø | è | -1 | 0 | ø |
1 | = | æ | 1 | 0 | ö | e_{1} | = | æ | 0 | 1 | ö | e_{2} | = | æ | 1 | 0 | ö | e_{12} | = | æ | 0 | -1 | ö |
è | 0 | 1 | ø | è | 1 | 0 | ø | è | 0 | -1 | ø | è | 1 | 0 | ø |
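The first representation above can be checked with a few lines of 2×2 matrix arithmetic: e_{1}^{2}=e_{2}^{2}=1, e_{1}e_{2}=e_{12} and e_{12}^{2}=-1, exactly the Â_{2} relations (the struct name M2 is ours):

```cpp
#include <cassert>

// Sketch: verifying the 2x2 real representation of R_2 above:
// e1 = [[1,0],[0,-1]], e2 = [[0,1],[1,0]], e12 = e1e2 = [[0,1],[-1,0]].
struct M2 { double a, b, c, d; };   // [[a,b],[c,d]]

M2 mul(M2 x, M2 y) {
    return { x.a*y.a + x.b*y.c, x.a*y.b + x.b*y.d,
             x.c*y.a + x.d*y.c, x.c*y.b + x.d*y.d };
}

const M2 e1  = { 1, 0, 0, -1 };
const M2 e2  = { 0, 1, 1,  0 };
const M2 e12 = { 0, 1, -1, 0 };
```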
Â_{3} has a "biquaternion" representation with Hermitian 2×2 complex matrices
1=1 | = | æ | 1 | 0 | ö | ; e_{1}= s_{1} | = | æ | 0 | 1 | ö | ; e_{2}= s_{2} | = | æ | 0 | -i | ö | ; e_{3}= s_{3} | = | æ | 1 | 0 | ö |
è | 0 | 1 | ø | è | 1 | 0 | ø | è | i | 0 | ø | è | 0 | -1 | ø |
ab | a | ||||
1 | s_{1} | s_{2} | s_{3} | ||
b | 1 | 1 | s_{1} | s_{2} | s_{3} |
s_{1} | s_{1} | 1 | -i s_{3} | +i s_{2} | |
s_{2} | s_{2} | +i s_{3} | 1 | -i s_{1} | |
s_{3} | s_{3} | -i s_{2} | +i s_{1} | 1 |
The s_{i} and 1 act as a basis for the full C_{2×2} algebra of complex 2×2 matrices since
æ | a | b | ö | = ½(a+d)1 + ½(b+c) s_{1} + ½i(b-c) s_{2} + ½(a-d) s_{3} |
è | c | d | ø |
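The half-sum/half-difference decomposition above is easy to exercise numerically: extract the four coefficients of an arbitrary complex 2×2 matrix on {1, s_{1}, s_{2}, s_{3}} and confirm they recombine to the original entries (the function name pauliCoeffs is ours):

```cpp
#include <cassert>
#include <complex>

// Sketch: decompose [[a,b],[c,d]] onto {1, s1, s2, s3} using
// c0 = (a+d)/2, c1 = (b+c)/2, c2 = i(b-c)/2, c3 = (a-d)/2.
using C = std::complex<double>;

void pauliCoeffs(C a, C b, C c, C d, C& c0, C& c1, C& c2, C& c3) {
    const C i(0, 1);
    c0 = 0.5 * (a + d);        // coefficient of 1
    c1 = 0.5 * (b + c);        // coefficient of s1
    c2 = 0.5 * i * (b - c);    // coefficient of s2
    c3 = 0.5 * (a - d);        // coefficient of s3
}
```

Recombining, entry (1,1) is c0+c3, (1,2) is c1-ic2, (2,1) is c1+ic2 and (2,2) is c0-c3.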
Each element of the special unitary group SU(2) of all unitary
(AA^{†}=1, where ^{†} denotes the conjugate transpose) C_{2×2} matrices having unit
determinant
can be expressed via U=(½iå_{j=1}^{3} q_{j}e_{j})^{↑}
for three real scalar parameters q_{j}. e_{1},e_{2}, and e_{3} are then referred to as the generators of SU(2),
so we can view SU(2) geometrically as the space of exponentiated Â_{3} 1-vectors.
SU(2) is a double cover of (2:1 homomorphic to) the group SO(3) of all orthogonal
(AA^{T}=1) unit determinant Â_{3×3} matrices, with
U acting on the 1-vector (x,y,z) represented as the Hermitian matrix
æ | z | x-iy | ö
è | x+iy | -z | ø
via X ® UXU^{-1} .
1=1 | = | æ | 1 | 0 | 0 | 0 | ö | ; e_{1} | = | æ | 1 | 0 | 0 | 0 | ö | ; e_{2} | = | æ | 0 | 1 | 0 | 0 | ö | ; e_{3} | = | æ | 0 | 0 | 1 | 0 | ö | ; e_{4} | = | æ | 0 | 0 | -1 | 0 | ö | |
ç | 0 | 1 | 0 | 0 | ÷ | ç | 0 | -1 | 0 | 0 | ÷ | ç | 1 | 0 | 0 | 0 | ÷ | ç | 0 | 0 | 0 | -1 | ÷ | ç | 0 | 0 | 0 | 1 | ÷ | |||||||||||
ç | 0 | 0 | 1 | 0 | ÷ | ç | 0 | 0 | -1 | 0 | ÷ | ç | 0 | 0 | 0 | 1 | ÷ | ç | 1 | 0 | 0 | 0 | ÷ | ç | 1 | 0 | 0 | 0 | ÷ | |||||||||||
è | 0 | 0 | 0 | 1 | ø | è | 0 | 0 | 0 | 1 | ø | è | 0 | 0 | 1 | 0 | ø | è | 0 | -1 | 0 | 0 | ø | è | 0 | -1 | 0 | 0 | ø |
1= | æ | 1 | 0 | 0 | 0 | ö ; | e_{1}= | æ | 0 | 0 | 0 | i | ö ; | e_{2}= | æ | 0 | 0 | 0 | 1 | ö ; | e_{3}= | æ | 0 | 0 | i | 0 | ö ; | e_{4}= | æ | 0 | 0 | 1 | 0 | ö ; | e_{5}= | æ | -i | 0 | 0 | 0 | ö |
ç | 0 | 1 | 0 | 0 | ÷ | ç | 0 | 0 | i | 0 | ÷ | ç | 0 | 0 | -1 | 0 | ÷ | ç | 0 | 0 | 0 | -i | ÷ | ç | 0 | 0 | 0 | 1 | ÷ | ç | 0 | -i | 0 | 0 | ÷ | ||||||
ç | 0 | 0 | 1 | 0 | ÷ | ç | 0 | -i | 0 | 0 | ÷ | ç | 0 | -1 | 0 | 0 | ÷ | ç | -i | 0 | 0 | 0 | ÷ | ç | 1 | 0 | 0 | 0 | ÷ | ç | 0 | 0 | i | 0 | ÷ | ||||||
è | 0 | 0 | 0 | 1 | ø | è | -i | 0 | 0 | 0 | ø | è | 1 | 0 | 0 | 0 | ø | è | 0 | i | 0 | 0 | ø | è | 0 | 1 | 0 | 0 | ø | è | 0 | 0 | 0 | i | ø |
We can construct matrices having 1 in the first to fourth entries of the first column (and zeroes elsewhere) respectively as
We can keep repeating this trick to create C_{16×16} representations of an orthogonal
1-vector basis for Â_{8,1}@Â_{6,3}@...@Â_{0,7} ;
C_{32×32} representations for
Â_{11}@Â_{9,2}@...@Â_{1,10} ;
C_{64×64} representations for Â_{12,1}@...@Â_{0,13}
and so on.
Other matrix representations
It can be shown that Â_{0,4} @ Â_{4,0} @ Q_{2×2} ;
Â_{0,6} @ Â_{8×8} ;
Â_{6,0} @ Q_{4×4} ;
Â_{7,0} @ Â_{5,2} @ C_{8×8} ;
Â_{0,8} @ Â_{8,0} @ Â_{16×16}
[
where Q_{2×2} denotes the space of 2×2 quaternion matrices etc.
]
but we do not provide example bases here. More generally, for even p+q, Â_{p,q}
is isomorphic to either
a real, complex, or quaternion matrix algebra. For odd p+q we sometimes have a sum (ordered pair) of two such
matrix algebras.
Adding Blades
Expressing multivectors in coordinate or matrix forms induces a natural addition of multivectors
and provides a potent computational technique, but regarding
multivector a as being the real-weighted sum of the 2^{N} outer product progeny
of N elemental "generators" e_{1},e_{2},...e_{N}, or as a particular real matrix, risks losing sight of duality.
All the "information" in a blade b is also "encoded" in its dual bi^{-1} and we
might equally naturally regard a as bb +gc + ..
where b = b_{1} + b_{2}i etc. as a sum of "complex"-weighted
blades, particularly for odd N when i is central and (when i^{2}=-1) we have
Â_{p,q} @ C_{p-1,q} @ C_{p,q-1}.
Because we can express a in terms of an extended basis we know that any multivector a
can be expressed as a complex-weighted sum of 2^{N-1} blades but it is the minimal number of blades necessary
that best "classifies" the multivector. We define the positive integer spread of a multivector a to be the
number of blades
in its "irreducible" real-weighted summation. The spreads of 1 and of e_{13} + e_{23} = (e_{1}+e_{2})Ùe_{3} are 1, for example.
However the spread does not fully categorise the "complexity" of a because we don't know how many blades
appear with their duals.
Expressing a multivector a such as bb +gc of reduced spread 2
in coordinate form "smears" b and c together; the spread and reduced spread are then not apparent.
Though the addition of multivectors is well defined and frame-independent, it is in many
cases of somewhat dubious merit, particularly when adding high grade blades. Though the coordinate approach
facilitates and encourages multivector additions and we will do so freely throughout this work,
the strict mathematical purist is nonetheless correct to maintain an element of unease.
Informally, multivectors are made for multiplying, not adding.
Multivector Products
Restricted products
Given the geometric product we can define a large variety of partial-geometric or
restricted products
by evaluating ab and then throwing away (zeroing the "coordinate" of) one or more particular
blades. We might zero all the odd blades,
for example, but more useful restricted products arise when the blades we zero are determined by the blades we multiply. The outer product is an example,
with a_{k}Ùb_{l} = (a_{k}b_{l})_{<k+l>} . Though all these products can in principle be implemented using a geometric product primitive,
this is frequently grossly computationally inefficient and we will later see that one should generalise
the encoded multivector product to "streamline" the evaluation of the more useful restricted products.
There are two "outer" products and a plethora of potential "inner" products,
most of which have something to commend them. All are restrictions of the geometric
product in that their result for two extended basis blades is either the same as that of the geometric product or zero.
From a programmer's perspective, the contractive
inner product ¿ (defined below) is the most fundamental inner product. The reader should accordingly become familiar
with ¿ , and also the Hestenes (.) product since it is ubiquitous in much of the literature, often required
in this work, and (unlike ¿) is "dual" to the outer product . The other inner products (·, ë, ¿_{+}) are not used here and seldom
seen elsewhere. They may have their place in a particular application, however, and are defined here for completeness.
The outer product
Given the geometric product, we can fully define the outer or exterior product
(aka. wedge product)
by defining
a_{k}Ùb_{l} º
(a_{k}b_{l})_{<k+l>}
where a_{k},b_{l} are blades of any grade;
and thence extending over multivectors by insisting on associativity and multilinearity.
From this definition it follows that:
aÙb = bÙa = ab ;
a_{k}Ùb_{l} = (-1)^{kl}b_{l}Ùa_{k} ;
aÙb = ½(ab - ba) º a×b ;
aÙb_{k} = ½(ab_{k} + (-1)^{k}b_{k}a)
= ½(ab_{k} + b_{k}^{#}a)
;
b_{k}Ùa = (-1)^{k}aÙb_{k}
a_{1}Ùa_{2}Ù...a_{k} = k!^{-1} å_{ij..m}e_{ij..m} a_{i}a_{j}...a_{m}
summing over all k! permutations of {1,2,..k} .
(aÙb)^{2} = (a¿b)^{2} - (a^{2})(b^{2})
= -a^{2}b^{2} sin^{2}(q) where q is the angle subtended by 1-vectors a and b
[ Proof : (aÙb)^{2} = -(aÙb)(bÙa)
= -(ab-a.b)(ba-b.a)
= -abba+(a.b)(ab+ba)-(a.b)^{2}
= -a^{2}b^{2} + 2(a.b)^{2} - (a.b)^{2}
= (a.b)^{2} - a^{2}b^{2} .]
(a_{1}Ùa_{2}Ù...a_{k})^{2}
= (-1)^{½k(k-1)} (k! Volume(k-simplex of points 0,a_{1},a_{2},...a_{k}))^{2}
= (-1)^{½k(k-1)} (Volume(parallelepiped with edge vectors a_{1},a_{2},...a_{k}))^{2}
a_{k}Ùa_{k}=0 is true for a single proper blade a_{k} but not for general k-vectors when k>1.
Note that 0-blades (scalars) have aÙa=a^{2} .
aÙb for Â_{3} | |||||||||
1 | e_{1} | e_{2} | e_{3} | e_{23} | e_{31} | e_{12} | e_{123} | ||
1 | 1 | e_{1} | e_{2} | e_{3} | e_{23} | e_{31} | e_{12} | e_{123} | |
e_{1} | e_{1} | 0 | -e_{12} | e_{31} | e_{123} | 0 | 0 | 0 | |
e_{2} | e_{2} | e_{12} | 0 | -e_{23} | 0 | e_{123} | 0 | 0 | |
b | e_{3} | e_{3} | -e_{31} | e_{23} | 0 | 0 | 0 | e_{123} | 0 |
e_{23} | e_{23} | e_{123} | 0 | 0 | 0 | 0 | 0 | 0 | |
e_{31} | e_{31} | 0 | e_{123} | 0 | 0 | 0 | 0 | 0 | |
e_{12} | e_{12} | 0 | 0 | -e_{123} | 0 | 0 | 0 | 0 | |
e_{123} | e_{123} | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
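A sketch of the outer product over general multivectors, under illustrative assumptions (Euclidean signature, basis blades as bitmasks, multivectors as {bitmask: coefficient} Python dictionaries; helper names are ours): a wedge term survives only when the two basis blades share no generator, so grades always add.

```python
def blade_gp(a, b):
    # sign and bitmask of the geometric product of Euclidean basis blades
    sign, t = 1, a >> 1
    while t:
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    return sign, a ^ b

def outer(x, y):
    # wedge product: keep only blade pairs sharing no generator
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            if ba & bb:              # shared generator => term vanishes
                continue
            s, m = blade_gp(ba, bb)
            out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

# the spread example from earlier: (e1 + e2) ^ e3 = e13 + e23 = -e31 + e23
assert outer({0b001: 1, 0b010: 1}, {0b100: 1}) == {0b101: 1, 0b110: 1}
# antisymmetry on 1-vectors: e3 ^ e1 = -e13
assert outer({0b100: 1}, {0b001: 1}) == {0b101: -1}
```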
We define the contractive inner product
(aka. Lounesto inner product)
(¿
or
û
) by
a_{k}¿b_{l} º
( a_{k}b_{l})_{<l-k>}
for l³k
,
0 else
where a_{k},b_{l} are blades of any grade;
and thence extending over multivectors by insisting on bilinearity.
¿ is neither associative nor commutative (symmetric).
In particular, (a¿a)=a^{2}
so (a¿a)¿b=a^{2} b
but a¿(a¿b) = 0 .
For k-vectors with k>1, contraction with a 1-vector a corresponds to orthogonal projection
into subspace a^{*} so
a¿b is the "component factor" of blade b perpendicular to 1-vector a .
We have the following properties:
a¿b for Â_{3} | |||||||||
1 | e_{1} | e_{2} | e_{3} | e_{23} | e_{31} | e_{12} | e_{123} | ||
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
e_{1} | e_{1} | 1 | 0 | 0 | 0 | 0 | 0 | 0 | |
e_{2} | e_{2} | 0 | 1 | 0 | 0 | 0 | 0 | 0 | |
b | e_{3} | e_{3} | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
e_{23} | e_{23} | 0 | e_{3} | -e_{2} | -1 | 0 | 0 | 0 | |
e_{31} | e_{31} | -e_{3} | 0 | e_{1} | 0 | -1 | 0 | 0 | |
e_{12} | e_{12} | e_{2} | -e_{1} | 0 | 0 | 0 | -1 | 0 | |
e_{123} | e_{123} | e_{23} | e_{31} | e_{12} | -e_{1} | -e_{2} | -e_{3} | -1 |
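The contractive product has an equally direct sketch under the same illustrative assumptions (Euclidean signature, bitmask blades, dict multivectors): for basis blades a_{k}¿b_{l} is nonzero only when every generator of a_{k} also occurs in b_{l}, which is a subset test on the bitmasks.

```python
def blade_gp(a, b):
    # sign and bitmask of the geometric product of Euclidean basis blades
    sign, t = 1, a >> 1
    while t:
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    return sign, a ^ b

def lcont(x, y):
    # a_k ¿ b_l = <a_k b_l>_{l-k}: for basis blades this is nonzero only
    # when a's generators are a subset of b's
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            if ba & ~bb:             # a has a generator outside b
                continue
            s, m = blade_gp(ba, bb)
            out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

# spot checks against the table above:
assert lcont({0b010: 1}, {0b110: 1}) == {0b100: 1}    # e2 ¿ e23 = e3
assert lcont({0b110: 1}, {0b111: 1}) == {0b001: -1}   # e23 ¿ e123 = -e1
assert lcont({0b110: 1}, {0b110: 1}) == {0: -1}       # e23 ¿ e23 = -1
assert lcont({0b110: 1}, {0b010: 1}) == {}            # grade can only drop
```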
¿ is sometimes known as left-contraction or onto-contraction as opposed to the
right- or by-contraction defined by
a_{k} ë b_{m} º
( a_{k}b_{m})_{<k-m>} for k³m ; 0 else
or equivalently by aëb = (a^{§}¿b^{§})^{§} where ^{§}
is the reverse operator defined below.
The semi-commutative inner product
Some authors favour the semi-symmetric
or semi-commutative inner product
(aka. Hestenes inner product) (.) defined by
a.b_{k} º b_{k}.a º 0
where b_{k} is any blade ;
a_{k}.b_{m} º
( a_{k}b_{m})_{<|k-m|>}
where a_{k},b_{m} are proper (nonscalar) blades;
and thence extending over multivectors by insisting on bilinearity.
The result is neither associative nor commutative ("symmetric").
It is "semi-symmetric" in that
a_{j}.b_{k} = (-1)^{j(k-j)}b_{k}.a_{j}
for pure j and k-vectors a_{j},b_{k} with j£k.
Note that scalars (0-blades) a,b satisfy aÙb = ab ;
a¿b = a.b = 0 so we can think of scalars as "self-orthogonal".
We obtain the identities
a.b for Â_{3} | |||||||||
1 | e_{1} | e_{2} | e_{3} | e_{23} | e_{31} | e_{12} | e_{123} | ||
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
e_{1} | 0 | 1 | 0 | 0 | 0 | e_{3} | -e_{2} | e_{23} | |
e_{2} | 0 | 0 | 1 | 0 | -e_{3} | 0 | e_{1} | e_{31} | |
b | e_{3} | 0 | 0 | 0 | 1 | e_{2} | -e_{1} | 0 | e_{12} |
e_{23} | 0 | 0 | e_{3} | -e_{2} | -1 | 0 | 0 | -e_{1} | |
e_{31} | 0 | -e_{3} | 0 | e_{1} | 0 | -1 | 0 | -e_{2} | |
e_{12} | 0 | e_{2} | -e_{1} | 0 | 0 | 0 | -1 | -e_{3} | |
e_{123} | 0 | e_{23} | e_{31} | e_{12} | -e_{1} | -e_{2} | -e_{3} | -1 |
We will use ¿ in preference to . whenever possible throughout this work.
a¿b = a.b " 1-vector a if b_{<0>}=0 so in many
equations ¿ and . are interchangeable.
However, only . is dual to Ù in the sense that
a.(bi) = (aÙb)i
; ie.
aÙ(bi) = (a.b)i ;
or equivalently a.(b^{*}) = (aÙb)^{*}
; aÙ(b^{*}) = (a.b)^{*}
;
a.b = (aÙ(bi^{-1}))i
;
aÙb = (a.(bi^{-1}))i
for any 1-vector a and multivector b
.
In particular aÙ(b.c)
= (a.(bÙ(ci^{-1})))i , ie.
(aÙ(b.c))^{*} = a.(bÙ(c^{*}))
.
We cannot replace . by ¿ in these fundamental duality equations.
In Â_{3} we have a×b = (aÙb)i^{-1}
; a×(b×c) = (aÙ((bÙc)i^{-1}))i^{-1}
= (a.c)b - (a.b)c
; and aÙbÙc = (a.(b×c))i where × denotes the traditional Â^{3} vector product.
The "fatdot" inner product
A variation of the semi-commutative inner product defined by
a_{k}·b_{m} º
( a_{k}b_{m})_{<|k-m|>}
k,m ³ 0.
This is sometimes known as the modified Hestenes or dot product but we will call it the fatdot product
here to avoid confusion with the traditional (Hestenes) inner product .
The multiplication table for · is the same as that for . except with regard to scalars.
The "scalar" row and column are filled as for Ù , ie.
according to 1·a = a·1 = a .
We can think of · as "abrogating" the "scalar handling" from Ù, reducing it
to the "thin" outer product ^. Thus · and ^ provide an alternate "decomposition" of the geometric product
to . and Ù which may be fruitful in some contexts but will not be exploited here.
The forced Euclidean contractive inner product
The forced Euclidean contractive inner product ¿_{+} "overrides" the signatures of the vectors
on which it operates. In a Euclidean space Â_{N} ,
¿_{+} = ¿ . In Â_{p,q} , ¿_{+} is defined with regard to
an orthonormal basis {e_{i}} .
We take 1¿_{+}a º a and e_{i}¿_{+}e_{j} º |e_{i}¿e_{j}| and extend ¿_{+} bilinearly.
The tabulation of ¿_{+} with regard to the extended (multivector) basis for Â_{p,q}
is thus the tabulation for ¿ for Â_{p+q}.
Although frame dependent, ¿_{+} is a useful product computationally, since one can sometimes simply "lift" a problem
(such as computing meets and joins) in a nonEuclidean space into Euclidean space and solve it there.
We define the frame-dependent
forced Euclidean geometric product by extending a¨_{+}b º a¿_{+}b + aÙb
over Â_{p,q} linearly and associatively so that the tabulation of ¨_{+} for a given basis for
Â_{p,q} is the tabulation for
¨ for Â_{p+q}.
We will later consider higher dimensional embeddings of U^{N} obtained by adding "extension" dimensions e_{+} and e_{-}
to a basis and in such cases will often wish to retain the negative signature of the extender
while forcing U^{N} Euclidean. We represent the unextended forced Euclidean contractive product by ¿_{(+)} and the
unextended forced Euclidean geometric product by ¨_{(+)} .
Commutator product
The antisymmetric commutator product is defined by
a×b º ½(ab-ba). It is our first non-partial-geometric
product, potentially containing blades not present in ab.
[ aka., without the ½ factor, the Lie product ]
× is nonassociative, and often represents the appropriate generalisation of the
Â^{3} vector product ×.
If a and b have a^{^}=a and b^{^}=b , or a^{^}=-a and b^{^}=-b , for reversing conjugation ^{^} then (a×b)^{^} = -a×b . Thus the commutator product of two same grade blades has grade <2;6;10;...>.
We have 2^{3N} frame-dependent commutation coefficients C^{ij..}_{kl.. , lm..} º e^{ij..} _{*} (e_{kl..}×e_{lm...}) which can be regarded as measuring the nonorthogonality of the frame.
With (a×) denoting the operator v ® a×v, we have
(a×)(b×)v =
½(a×)(bv-vb) =
¼(abv +vba - avb - bva) .
Whence
((a×)×(b×))v =
¼((a×b)v + v(b×a))
= ½(a×b)×v .
a×b for Â_{3} | |||||||||
1 | e_{1} | e_{2} | e_{3} | e_{23} | e_{31} | e_{12} | e_{123} | ||
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
e_{1} | 0 | 0 | -e_{12} | e_{31} | 0 | e_{3} | -e_{2} | 0 | |
e_{2} | 0 | e_{12} | 0 | -e_{23} | -e_{3} | 0 | e_{1} | 0 | |
b | e_{3} | 0 | -e_{31} | e_{23} | 0 | e_{2} | -e_{1} | 0 | 0 |
e_{23} | 0 | 0 | e_{3} | -e_{2} | 0 | e_{12} | -e_{31} | 0 | |
e_{31} | 0 | -e_{3} | 0 | e_{1} | -e_{12} | 0 | e_{23} | 0 | |
e_{12} | 0 | e_{2} | -e_{1} | 0 | e_{31} | -e_{23} | 0 | 0 | |
e_{123} | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
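The commutator is the first product here that cannot be phrased as zeroing blades of ab; a direct sketch (Euclidean signature, bitmask blades, dict multivectors; illustrative code, not the text's own) simply forms both orderings:

```python
def blade_gp(a, b):
    # sign and bitmask of the geometric product of Euclidean basis blades
    sign, t = 1, a >> 1
    while t:
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    return sign, a ^ b

def gp(x, y):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, m = blade_gp(ba, bb)
            out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

def comm(x, y):
    # a x b = (ab - ba)/2
    p, q = gp(x, y), gp(y, x)
    out = {m: (p.get(m, 0) - q.get(m, 0)) / 2 for m in set(p) | set(q)}
    return {m: c for m, c in out.items() if c}

# 1-vectors: e1 x e2 = e12 (agrees with the wedge)
assert comm({0b001: 1}, {0b010: 1}) == {0b011: 1}
# table check: e31 x e23 = e12  (e31 = -e13 in ascending-index encoding)
assert comm({0b101: -1}, {0b110: 1}) == {0b011: 1}
```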
We extend this definition of × to give commutator "products" of operators and functions. If ¦(a) and g(a) are multivector-valued functions of multivectors we define (¦×g)(a) º ½(¦(g(a)) - g(¦(a))) º ½(¦g(a) - g¦(a)) .
We also define ¦^{×}(a,b) º ¦(a)×¦(b) .
We define the k-fold commutator product
by
a^{×k}b º a×(a×(....(a×(a×b))..))
= (-1)^{k} ((...(b×a)×a)×...)×a
where there are k a's and ×'s in the sequence.
a^{×0}b = b ;
a^{×1}b = ½(ab-ba) ;
a^{×2}b = ¼(a^{2}b-2aba+ba^{2}) ;
... ;
a^{×k}b
= 2^{-k}å_{i=0}^{k}
(-1)^{i} ^{k}C_{i}
a^{k-i}ba^{i} .
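The binomial expansion 2^{-k} å_{i} (-1)^{i} ^{k}C_{i} a^{k-i}ba^{i} (note the sign runs with the summation index i, matching the k=2 case ¼(a^{2}b-2aba+ba^{2})) can be checked against direct iteration. A sketch using exact rational arithmetic (Euclidean signature, bitmask blades, dict multivectors; function names are illustrative):

```python
from fractions import Fraction
from math import comb

def blade_gp(a, b):
    # sign and bitmask of the geometric product of Euclidean basis blades
    sign, t = 1, a >> 1
    while t:
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    return sign, a ^ b

def gp(x, y):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, m = blade_gp(ba, bb)
            out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

def add(x, y):
    out = dict(x)
    for m, c in y.items():
        out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c}

def scale(x, s):
    return {m: s * c for m, c in x.items()}

def power(x, k):
    out = {0: Fraction(1)}
    for _ in range(k):
        out = gp(out, x)
    return out

def kfold(a, b, k):
    # a x (a x ( ... (a x b) ... )) with k a's, where a x b = (ab - ba)/2
    for _ in range(k):
        b = scale(add(gp(a, b), scale(gp(b, a), -1)), Fraction(1, 2))
    return b

def binom_form(a, b, k):
    # 2^{-k} sum_i (-1)^i C(k,i) a^{k-i} b a^i
    out = {}
    for i in range(k + 1):
        term = gp(gp(power(a, k - i), b), power(a, i))
        out = add(out, scale(term, Fraction((-1) ** i * comb(k, i), 2 ** k)))
    return out

a = {0b001: Fraction(1), 0b110: Fraction(2)}   # a = e1 + 2 e23
b = {0b010: Fraction(3), 0b111: Fraction(1)}   # b = 3 e2 + e123
for k in range(4):
    assert kfold(a, b, k) == binom_form(a, b, k)
```

The identity holds in any associative algebra since left and right multiplication by a commute as operators.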
AntiCommutator product
The symmetric anticommutator product is defined by
a~b º ½(ab+ba).
The geometric product decomposes naturally as
ab = a×b + a~b .
Provided a^{2} commutes with b and b^{2} commutes with a, then
a and b both commute with a~b and anticommute with a×b. Further
a×b commutes with a~b.
The anticommutator product of two same grade blades has grade <0;4;8;12;...>.
Scalar product
The commutative (symmetric) scalar product is defined by
a_{*}b º (ab)_{<0>} = (ab)¿1 .
It satisfies
a_{k *}b_{k} = a_{k}¿b_{k} .
Taking the scalar product with 1 corresponds to taking the scalar part
a_{*}1 = a_{<0>} = a¿1 .
We have the powerful cyclic scalar rule
(abc)_{<0>} = (cab)_{<0>}
for all multivectors a,b,c
From a programmer's perspective the scalar product can be regarded both as a multivector-valued product returning a 0-grade
(ie. scalar valued) multivector, and as a real-valued bilinear function of two multivectors. For code efficiency it is frequently preferable to
implement it returning a real rather than a multivector type.
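A real-returning sketch (Euclidean signature, bitmask blades, dict multivectors; illustrative code): only matching basis blades contribute a scalar term, each weighted by the sign of that blade's square, so no full product need be formed.

```python
def blade_gp(a, b):
    # sign and bitmask of the geometric product of Euclidean basis blades
    sign, t = 1, a >> 1
    while t:
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    return sign, a ^ b

def gp(x, y):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, m = blade_gp(ba, bb)
            out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

def sprod(x, y):
    # (xy)_<0> as a real: only matching basis blades contribute
    return sum(blade_gp(m, m)[0] * c * y[m] for m, c in x.items() if m in y)

a = {0: 2, 0b011: 1, 0b101: -3}
b = {0b011: 4, 0b110: 1}
assert sprod(a, b) == gp(a, b).get(0, 0)        # agrees with <ab>_0
# cyclic scalar rule: (abc)_<0> = (cab)_<0>
c = {0b001: 2, 0b111: 1}
assert gp(gp(a, b), c).get(0, 0) == gp(gp(c, a), b).get(0, 0)
```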
Scalar-Pseudoscalar product
The commutative scalar-pseudoscalar product is defined by
a_{*}b º (ab)_{<0,N>} = (ab)¿(1+i) .
For N odd we will also refer to _{*} as the central product since it is central-valued
and we have the vastly powerful cyclic scalar-pseudoscalar rule
(abc)_{<0;N>} = (cab)_{<0;N>}
.
For even N, the "central product" coincides with the scalar product _{*} rather than the scalar-pseudoscalar product.
Inversive product
a*b º (a¿b) / (|a||b|) .
Thus a*b = cos(q_{Ð}(a , b))
where the magnitude |a| is as defined below and q_{Ð}(a,b) denotes the "angle subtended by"
1-vectors a and b.
Delta products
The greater delta product D (aka. disjoint) is a nonbilinear restrictive product
returning the highest grade nonvanishing component of the geometric product, or zero if the geometric product is zero.
This can be computed, albeit somewhat inefficiently, by calculating the geometric product keeping track of, or subsequently seeking,
the highest grade attained by the blades in the product, and then restricting the product to only this grade.
It is most commonly applied to blades and spans the "difference" between them, so for example
((e_{1}+e_{2})Ùe_{34}) D e_{235} = -e_{1245}. It is profoundly useful in constructing meets
and joins.
Of course, calculating a delta product can present us with dilemmas when small coordinate values are encountered. If a product ab has a large scalar part and some comparatively tiny 2-blade coordinates, is the highest "nonvanishing" grade zero or two? Are the small 2-vector coordinates a genuine geometrical artifact, or just "failed zeroes" arising from finite precision computations? What magnitudes may we discard? Our choice will substantively affect the grade and magnitude of aDb and our computations risk becoming more art than science. Nonetheless, the delta product is profoundly useful since in many cases it can be computed unambiguously and is then fundamental in constructing meets, joins, and disjoints as described later.
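A sketch with an explicit discard tolerance (Euclidean signature, bitmask blades, dict multivectors; the eps parameter is our illustrative choice for the "failed zero" threshold discussed above):

```python
def blade_gp(a, b):
    # sign and bitmask of the geometric product of Euclidean basis blades
    sign, t = 1, a >> 1
    while t:
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    return sign, a ^ b

def gp(x, y):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, m = blade_gp(ba, bb)
            out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

def grade(m):
    return bin(m).count("1")

def delta(x, y, eps=1e-9):
    # greater delta product: restrict xy to its highest surviving grade,
    # discarding coordinates below eps as "failed zeroes"
    p = {m: c for m, c in gp(x, y).items() if abs(c) > eps}
    if not p:
        return {}
    top = max(grade(m) for m in p)
    return {m: c for m, c in p.items() if grade(m) == top}

# the text's example: ((e1+e2) ^ e34) D e235 = -e1245
x = {0b01101: 1, 0b01110: 1}      # e134 + e234 = (e1+e2) ^ e34
y = {0b10110: 1}                  # e235
assert delta(x, y) == {0b11011: -1}
```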
We also have the lesser delta product returning the nonzero component of minimal grade
adb = (ab)_{Min} .
Conjugative Products
For every multivector conjugation ^{^} there are two associated restricted geometric products.
One restricting to the "^{^}-real" blades (b^{^} = b) , and one to the "^{^}-imaginary"
blades with b^{^}=-b.
Rescaled Product
The rescaled product is defined by a§b º ((a)_{rsc}(b)_{rsc})_{rsc} where
(x)_{rsc} is an arbitrary, potentially frame dependent, computationally convenient rescaling normalisation of x
such as division by the maximal absolute coordinate, or the sum of absolute coordinates, of x in a particular frame.
a§b is equivalent to a¨b apart from scale.
The rescaled product is particularly useful when evaluating sandwich rotor products such as
(½qa)^{↑} x (-½qa)^{↑} for large q when intermediary coordinates can acquire colossal magnitudes
and rounding errors become significant.
The evaluation of (½qa)^{↑} § x § (-½qa)^{↑} is usually far better behaved ;
and the correct scale can easily be recovered for non null x from
((½qa)^{↑} x (-½qa)^{↑})^{2}
= x^{2}
whenever x^{2} commutes with a .
Pure Product Rule
a_{k}b_{l} = (a_{k}b_{l})_{<|k-l|>}
+ (a_{k}b_{l})_{<|k-l|+2>}
+ .. + (a_{k}b_{l})_{<k+l>}
=
å_{m=0}^{½(k+l-|k-l|)} (a_{k}b_{l})_{<|k-l|+2m>}
hence the geometric product of two pure multivectors is either odd or even
(as defined under Involution below)
according as integer k-l is odd or even.
[ Proof : True for k=1 and then by induction on k for single k-blade a_{k}. Result follows by distributivity.
.]
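The grade pattern is easy to confirm numerically. A sketch (Euclidean Â^{4}, bitmask blades, dict multivectors; the sample blades are arbitrary illustrative choices): the product of a 2-blade and a 3-blade contains only grades from {1, 3, 5}, all sharing the parity of k-l.

```python
def blade_gp(a, b):
    # sign and bitmask of the geometric product of Euclidean basis blades
    sign, t = 1, a >> 1
    while t:
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    return sign, a ^ b

def gp(x, y):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, m = blade_gp(ba, bb)
            out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

def grade(m):
    return bin(m).count("1")

def wedge(x, y):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            if ba & bb:
                continue
            s, m = blade_gp(ba, bb)
            out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

def vec(*coords):
    return {1 << i: c for i, c in enumerate(coords) if c}

a2 = wedge(vec(1, 2, 0, 1), vec(0, 1, 3, 0))                          # 2-blade
b3 = wedge(wedge(vec(1, 0, 1, 0), vec(0, 1, 0, 2)), vec(1, 1, 1, 1))  # 3-blade
grades = {grade(m) for m in gp(a2, b3)}
assert grades <= {1, 3, 5}               # |2-3|, |2-3|+2, ..., 2+3
assert all(g % 2 == 1 for g in grades)   # k-l odd => product is odd
```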
The Intersective Product
The intersective product is not a restricted geometric product and is frame dependent.
It picks out the "shared axes" eg. e_{3}_{Ç}e_{123}=e_{3}. In programming terms, while the
geometric product is based on the exclusive or (XOR) of the bitwise enumeration indices,
the intersective product is based on their logical AND.
We bring in the sign changes due to commutations ( e_{2}_{Ç}e_{12} = -e_{2}) but not the signatures.
We will later encounter the frame-independent meet operation aÇb and while we do have
e_{[.j.]} _{Ç} e_{[.k.]}
= e_{[.j.]} Ç e_{[.k.]}
= ±e_{[.j AND k.]}
we will see that Ç is not distributive across + for same grade blades.
(e_{1}+e_{2})Çe_{1} = 0 while
(e_{1}+e_{2})_{Ç}e_{1} = e_{1} .
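A sketch of one possible reading (bitmask blades, dict multivectors; the containment condition below is our inference, chosen to be consistent with the three examples given in the text, and zeroes terms where neither basis blade contains the other):

```python
def commutation_sign(a, b):
    # parity of the transpositions merging the two bitmask blades;
    # signatures are deliberately not brought in
    sign, t = 1, a >> 1
    while t:
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    return sign

def iprod(x, y):
    # frame-dependent intersective product: AND of the bit indices with
    # the commutation sign (an illustrative reading, not a definition
    # taken from the text)
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            if (ba | bb) not in (ba, bb):   # neither blade contains the other
                continue
            m = ba & bb                     # logical AND of the indices
            out[m] = out.get(m, 0) + commutation_sign(ba, bb) * ca * cb
    return {m: c for m, c in out.items() if c}

assert iprod({0b100: 1}, {0b111: 1}) == {0b100: 1}            # e3 n e123 = e3
assert iprod({0b010: 1}, {0b011: 1}) == {0b010: -1}           # e2 n e12 = -e2
assert iprod({0b001: 1, 0b010: 1}, {0b001: 1}) == {0b001: 1}  # (e1+e2) n e1
```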
Precedence Conventions
In much of the GA literature, precedence conventions are adopted for multivector products with
outer products taking precedence over inner products which in turn take precedence over geometric products so that,
for example, a.bcÙd.e denotes
(a.b)((cÙd).e) .
We will not rely on such conventions, favouring explicit brackets with regard to product symbol scope throughout this work,
in accordance with programmer morality.
Multivector Operations
We will define a number of right-operators ^{§}, ^{†}, ^{^}, ^{2} that act on multivectors to return multivectors.
For all of these, we adopt the precedence convention that they apply leftwards prior to all products.
Thus xy^{§} denotes x(y^{§}) rather than (xy)^{§} ;
xy^{2}z denotes x(y^{2})z ; and so forth.
We introduce and adopt the "Bellian notation"
a_{^}(b) º (a)b(a^{^}) º aba^{^} .
If a^{^}^{^}=a we can represent (a^{^})ba by
a^{^}_{^}(b).
To minimise notational ambiguities, if we have a "point-dependant" multivector a_{p} we will sometimes
put the positional suffix to the right of the subscripted operator symbol as, for example,
a_{§}_{p}(b) º (a_{p})_{§}(b) º a_{p}ba_{p}^{§} .
Lifts
For a given e_{-} in Â_{p,q} with q>0 we can define an e_{-}-lift
Lift_{e-}: Â_{p,q} ® Â_{p+1,q-1} taking ae_{-} + b_{1}e_{1}+..
in Â_{p,q} to ae_{-}' + b_{1}e_{1}'+... in Â_{p+1,q-1}
where e_{-}', e_{1}',e_{2}',... is a basis for Â_{p+1,q-1} with e_{i}'¿e_{j}' = e_{i}¿e_{j}
but e_{-}'^{2} = -e_{-}^{2}. We think of moving to the "same" multivector in a variant algebra in which e_{-}
has positive signature. When q=1 (or when any other negative signatured basis vectors are
the extenders of a higher dimensional embedding) we refer to a Euclidean lift. Obviously we
have Lift_{e-}^{-1} : Â_{p+1,q-1} ® Â_{p,q} moving back again
and we can construct e_{-}-dependent operations on Â_{p,q} multivectors by lifting them into
Â_{p+1,q-1}, applying an operation F:
Â_{p+1,q-1} ® Â_{p+1,q-1}
, and then lifting back to Â_{p,q}. Formally, we compute Lift_{e-}^{-1}(F( Lift_{e-}(a))) . Informally,
we simply treat e_{-} as having positive signature throughout our computations.
Conjugations
A conjugation flips the signs of the coefficients of specific blades of a multivector.
For any conjugation ^{^} we have a^{^}^{^}=a " a which we can write as ^{^}^{2}=1.
We say a conjugation ^{^} is a reversing conjugation (aka. an anti-automorphism)
if (ab)^{^} = (b^{^})(a^{^}) .
We say a conjugation ^{^} is an automorphic conjugation (aka. an automorphism or nonreversing)
if (ab)^{^} = (a^{^})(b^{^}) .
We say a conjugation is semireversing if it is neither reversing nor automorphic.
A reversing or automorphic conjugation is completely specified by its action on scalars and 1-vectors
and obeys
(a^{k})^{^} = (a^{^})^{k} for integer k and thus
satisfies the "exponent hopping" rule (e^{a})^{^} = e^{a^}
or a^{↑}^{^} = a^{^}^{↑} " a
where exponentiation is as defined below.
All conjugations of interest here preserve 0-blades (scalars), save the negation conjugation.
Conjugations are often best implemented
by the direct sign flipping of appropriate components in the given basis rather than
a "formal" product evaluation.
We will see that there are four fundamental conjugations, one of which is frame-dependent, from which
sixteen composite conjugations may be constructed. These provide 112 distinct bilinear multivector products in addition to ¿, Ù and so forth
such as a^{§}b or a^{#}b^{†} . If we also consider products such as
a^{§}Ùb^{§} for products ¿, ë, Ù, _{*}, and × as well as the geometric product we obtain
6×112 bilinear multivector products!
Identity
The identity conjugation has no effect, leaving all blades unaltered. We can denote it by operator 1
or as conjugations ^{1} or, better, ^{=} .
1(a) º a^{1} º a^{=} º a " a .
We could represent aba by a_{1}(b) since a^{1}=a but
this is notationally dangerous since we often use _{1} as a suffix so
aba º a_{=}(b) is preferable.
Negation
The negation conjugation negates every blade. It is the only conjugation that negates scalars.
We denote the negation of a by -a rather than by a^{-} .
Negation is neither reversing nor automorphic, with (-a)(-b)=ab.
Reverse Conjugation
We define a to be a k-versor if it is the geometric product of k nonzero 1-vectors
a=a_{1}a_{2}...a_{k}. If k is even we say a is an even versor.
If a_{1},a_{2},..a_{k} are mutually orthogonal then
a_{1}a_{2}..a_{k} =
a_{1}Ùa_{2}Ù..a_{k} and so, with regard to vectors known to be orthogonal, versors are equivalent to blades.
Any k-blade a_{1}Ùa_{2}Ù...Ùa_{k} can be expressed as a k-versor a_{1}'a_{2}'...a_{k}'
where a_{1}'=a_{1} , a_{2}' = ^_{a1'}(a_{2}) ,...
a_{i}'= ^_{a1Ùa2Ù...Ùa(i-1)}(a_{i}) ,...
is but one suitable choice of 1-vector factors.
[ ^ denotes the rejection operator defined later. ]
We call a^{§} = a_{k}...a_{2}a_{1}
the reverse conjugation of versor a and to accommodate general multivectors we
define the reverse of k-blade b_{k}=b_{1}Ùb_{2}Ù...b_{k}
by
b_{k}^{§} º (-1)^{½ (k-1)k}b_{k}
=b_{k}Ù...Ùb_{2}Ùb_{1}
and can extend ^{§} to general a as the sum of the reversals of its component blades.
^{§} thus "reverses" both versors and blades.
Since all k-blades reverse with the same sign change (regardless of signatures), we have
a^{§} = å_{k=0}^{N} a_{<k>}^{§}
= å_{k=0}^{N} (-1)^{½ (k-1)k}a_{<k>}
making reversals very simple to compute. We avoid, in particular,
the need to factorise.
The sign sequence (-1)^{½ (k-1)k} k³0
= (+,+,-,-, +,+,-,-, +,+,-,- .....)
is fundamental in multivector analysis. For N<6, only 2-vectors and 3-vectors change sign
under reversal. For N=1,4,5,8,9,12,13,... we have i^{§}=i.
We write a_{<+§>} for the components of a unchanged by ^{§},
and a_{<-§>} for the those components which change sign under ^{§}.
We then have
a^{§} = a_{<+§>} - a_{<-§>}
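The per-grade sign flip really is all that is needed computationally. A sketch (Euclidean signature, bitmask blades, dict multivectors; illustrative code) together with a check of the anti-automorphism property:

```python
def blade_gp(a, b):
    # sign and bitmask of the geometric product of Euclidean basis blades
    sign, t = 1, a >> 1
    while t:
        if bin(t & b).count("1") % 2:
            sign = -sign
        t >>= 1
    return sign, a ^ b

def gp(x, y):
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, m = blade_gp(ba, bb)
            out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

def grade(m):
    return bin(m).count("1")

def reverse(x):
    # flip each grade-k component by (-1)^(k(k-1)/2): exactly the grades
    # whose second bit is set (2,3,6,7,10,11,...)
    return {m: (-c if grade(m) & 2 else c) for m, c in x.items()}

# the fundamental sign sequence +,+,-,-,+,+,-,- over grades 0..7
assert [-1 if k & 2 else 1 for k in range(8)] == [1, 1, -1, -1, 1, 1, -1, -1]
# anti-automorphism: (ab)^reverse = b^reverse a^reverse
a = {0b001: 1, 0b011: 2, 0b111: -1}
b = {0b010: 3, 0b110: 1}
assert reverse(gp(a, b)) == gp(reverse(b), reverse(a))
```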
In a Euclidean multivector space Â_{N}, the reversed multivector a^{§} corresponds to the transpose A^{T} of the matrix representation of a.
Clearly (ab)^{§} = b^{§}a^{§} so ^{§} is a reversing conjugation. For any reversing conjugation ^{^} we have (a^{§})^{^} = (a^{^})^{§} which we can consider as ^{^} "commuting" with ^{§} and write as ^{^}×^{§}=0 .
In the biquaternion Â_{3} model, reverse is known as complex conjugation. If b=a_{0}+b_{0}i +(a_{1}+b_{1}i)e_{1} +(a_{2}+b_{2}i)e_{2} +(a_{3}+b_{3}i)e_{3} we have b^{§} = (a_{0}-b_{0}i) +(a_{1}-b_{1}i)e_{1} +(a_{2}-b_{2}i)e_{2} +(a_{3}-b_{3}i)e_{3} .
b(b^{§}) then has scalar part (a_{0}^{2}+a_{1}^{2}+a_{2}^{2}+a_{3}^{2} + b_{0}^{2}+b_{1}^{2}+b_{2}^{2}+b_{3}^{2}) and e_{1} coordinate 2(a_{0}a_{1}+b_{0}b_{1}) , with the e_{2} and e_{3} coordinates following the same pattern.
Rotors
A rotor is an even multivector that satisfies
aa^{§} =
(aa^{§})_{<0>} º
±|a|_{§}^{2} ¹ 0
and is a unit rotor if |a|_{§}=1 .
If a rotor is also a 2k-versor we call it a k-rotor.
Any k-versor of nonnull 1-vectors is a rotor but not all rotors are versors,
eg. a+be_{123} is a rotor (with |a+be_{123}|^{2} = a^{2}-b^{2})
that has no versor form.
Involution Conjugation
Somewhat akin to ^{§} is the main involution conjugation defined by
a^{#} º å_{k=0}^{N} (-1)^{k}a_{<k>}
or, equivalently,
a^{#} º a_{<+>} - a_{<->}
.
We have (a^{#})^{#} = a as for all conjugations.
We say a multivector a is even if a^{#} = a = a_{<+>},
odd
if a^{#} = -a = -a_{<->}, and mixed otherwise.
For 1-vector a and l-blade b_{l} ,
aÙb_{l} = ½(ab_{l} + b_{l}^{#}a)
;
a¿b_{l} = ½(ab_{l} - b_{l}^{#}a).
(ab)^{#} = (a^{#})(b^{#}) so ^{#} is an automorphic conjugation.
Clifford Conjugation
Clifford conjugation ^{©} = ^{§}^{#} is the combination of reverse and main involution
a^{©} º (a^{§})^{#} = (a^{#})^{§} so that
a_{<k>}^{©}
= (-1)^{½(k-1)k + k}a_{<k>}
= (-1)^{½(k+1)k}a_{<k>}
It negates grades 1,2,5,6,9,10,13.... .
We will seldom use the ^{©} symbol, favouring the more explicit ^{§}^{#}.
In the biquaternian Â_{3} representation
(a_{0}+b_{0}i)
+(a_{1}+b_{1}i)e_{1}
+(a_{2}+b_{2}i)e_{2}
+(a_{3}+b_{3}i)e_{3}
=
a_{0} + a_{1}e_{1}+a_{2}e_{2}+a_{3}e_{3}
+ b_{1}e_{23}+b_{2}e_{31}+b_{3}e_{12}
+ b_{0}e_{123}
the Clifford conjugation is
((a_{0}+b_{0}i)
+(a_{1}+b_{1}i)e_{1}
+(a_{2}+b_{2}i)e_{2}
+(a_{3}+b_{3}i)e_{3})^{§}^{#}
= (a_{0}+b_{0}i)
-(a_{1}+b_{1}i)e_{1}
-(a_{2}+b_{2}i)e_{2}
-(a_{3}+b_{3}i)e_{3}
and is known as vector conjugation.
Since s_{i}^{D}=- s_{i} while 1^{D}=1 , Â_{3} Clifford conjugation
corresponds to the conventional matrix adjoint ^{D} in the 2×2 complex matrix representation.
Thus b(b^{§}^{#}) =
(a_{0}+b_{0}i)^{2} -(a_{1}+b_{1}i)^{2} -(a_{2}+b_{2}i)^{2} - (a_{3}+b_{3}i)^{2}
is the conventional (complex number) determinant of the 2×2 complex matrix representation of b.
We will here refer to the restrictive products a©_{+}b º (ab)_{<1,2,5,6,9,10,..>}
and a©_{-}b º (ab)_{<0,3,4,7,8,...>} as the real Clifford
and imaginary Clifford products.
Mitian Conjugation
This automorphic conjugation is specific to a given orthonormal frame. All our multivector operations
thus far have been coordinate-invariant but, in non-Euclidean cases,
this one is not.
Suppose we have an orthonormal
frame { e_{i}} for Â_{p,q,0} with e_{i} for p < i £ N having
negative signature. We define (and here name) the automorphic mitian conjugation of b by
b^{§} º e_{(p+1)..N}_{§}_{#}(b)
º e_{(p+1)..N} b e^{N..(p+1)}
so that, in particular, 1^{§} =1 ; e_{i}^{§} = (-1)^{q}e_{i} i£p
; (-1)^{q-1}e_{i} i>p ,
negating only the positive signature basis 1-vectors for odd q; and only the negative signature
basis 1-vectors for even q.
^{§} of a basis blade thus represents the (anti)commutation of that blade with
the particular negative-space spanning pseudoscalar e_{(p+1)..N}.
Unlike other conjugations which flip all k-blades the same way,
a^{§} ¹ ± a for general 1-vector a.
The mitian conjugation is the identity conjugation in a Euclidean space.
In a nonEuclidean space
it is of principal interest in constructing the more commonly seen Hermitian conjugation beloved of physicists.
From a programming perspective, negation, involution, reversion, and mitian conjugation are the four fundamental
conjugations from which all other spatially invariant reversing or automorphic conjugations may be constructed.
Hermitian Conjugation
Like mitian conjugation, the reversing Hermitian conjugation (aka. conjugation or modulatory conjugation)
is frame-dependent.
b^{†} º b^{§}^{[-]}
where ^{[-]} denotes the frame-dependent negation of the negative signatured basis 1-vectors, corresponds for odd N
when i^{2}=-1 to matrix Hermitian conjugation (B^{†} º B^{T}^{^}) of the matrix representation.
For odd N, ^{†}=^{§}^{#}^{§} with the ^{#} serving to negate the effects of the ^{§} on the basis 1-vectors.
For even N we have ^{†}=^{§}^{§}.
In Â_{p,q} we can define ^{†}=^{§}^{#}^{§} for odd q and
^{†} = ^{§}^{§} for even q so that it always negates the negative signatured basis 1-vectors. In a Euclidean space, ^{†} = ^{§} is frame-independent reversion.
Our motivation for ^{†} is that
a_{k}^{†}a_{k} = 1 for any unit k-blade (a_{k}^{2}=± 1) whose 1-vector components
satisfy a^{†} = ± a; and (a_{k}^{†}a_{k})_{<0>} > 0 for all nonzero a_{k} , even when a_{k}^{2}=0 .
[ Proof :
Suppose a_{k} has i component 1-vectors of negative signature and k-i of positive signature.
Then for odd q
a_{k}^{§}^{§}^{#}a_{k}
= e_{(p+1)..N} a_{k}^{§}^{#} e^{N..(p+1)} a_{k}
= e_{(p+1)..N} (-1)^{k}a_{k}^{§}a_{k}^{§} e^{N..(p+1)}
= e_{(p+1)..N} (-1)^{k+(k-i)}a_{k}^{§}a_{k} e^{N..(p+1)}
= e_{(p+1)..N} (-1)^{k+(k-i)+i} e^{N..(p+1)}
= 1 ;
while for even q we have
a_{k}^{§}^{§}a_{k}
= e_{(p+1)..N} a_{k}^{§} e^{N..(p+1)} a_{k}
= e_{(p+1)..N} (-1)^{i}a_{k}^{§}a_{k} e^{N..(p+1)}
= e_{(p+1)..N} (-1)^{2i} e^{N..(p+1)}
= 1 .
.]
Thus (a^{†}a)_{<0>}
= å_{k=0}^{N} a_{k}^{†}a_{k}
= å_{k=0}^{N} |a_{k}^{§}a_{k}|
provides a frame-dependant positive-definite real-valued measure known as the squared modulus of the multivector.
^{†} provides the
inverse frame defined below with e_{i}^{†} = e^{i} = e_{i}^{-1} for unit e_{i}.
A general vector
v = å_{i=1}^{N} v^{i}e_{i} then has
v^{†} = å_{i=1}^{N} v^{i}e^{i} and hence
(a^{†})¿b = a¿(b^{†}) = a¿_{+}b .
(aa^{†})_{<0>} ³ 0 is then the squared forced Euclidean length of a (zero only for a=0).
In a Euclidean space, ^{†} = ^{§} so Euclidean Hermitian conjugation is reverse.
Nullvector e_{-}+e_{1} where e_{1}¿e_{-}=0 has (e_{-}+e_{1})^{†}(e_{-}+e_{1})=2 + e_{1}Ùe_{-} for Hermitian conjugation with respect to e_{-}.
Setting unit minussquare 1-vector e_{-}' = (e_{-}±ae_{2})(1-a^{2})^{-½}
for aÎ(0,1) we have e_{-} + e_{1} = (1-a^{2})^{½}e_{-}' -/+ ae_{2} + e_{1}
and (e_{-}+e_{1})^{†}'¿(e_{-}+e_{1}) still equals 2 where ^{†}' is Hermitian conjugation with respect to e_{-}'.
However, setting minussquare
e_{-}' = (e_{-}±ae_{1})(1-a^{2})^{-½}
for aÎ(0,1) we have e_{-} + e_{1} = (1-a^{2})^{½}e_{-}' + (1-/+a)e_{1}
and (e_{-}+e_{1})^{†}'(e_{-}+e_{1})
= 2(1-/+a) + (1-a^{2})(1-/+a^{2})e_{1}Ùe_{-}'
= 2(1-/+a) + (1-a^{2})^{½}(1-/+a^{2})e_{1}Ùe_{-}
where ^{†}' is Hermitian conjugation with respect to e_{-}'.
This shows that by choosing frames appropriately we can make the positive Hermitian magnitude of a given null vector
as close to zero as we like but cannot achieve more than the forced Euclidean magnitude.
Likewise the 1-vector v=2e_{1}+e_{-} has frame invariant magnitude
|v| = |v¿v|^{½} = 3^{½} but
its e_{-}-relative modulus |v|_{+} = Ö(4+1) = 5^{½} is frame specific. With regard to an alternate frame
{e_{+}'=3^{-½}v , e_{-}'=(e_{+}e_{-})e_{+}'} v is expressed as 3^{½}e_{+}' and attains
minimal modulus 3^{½}.
Thus the Hermitian modulus |a^{†}a|^{½} is frame dependant even for null a.
We say a multivector is Hermitian if a^{†} = a with regard
to a particular frame of interest. In a Minkowski space Â_{N-1,1} this corresponds to
e_{4}¿a = 0 where e_{4} is the negative signatured frame vector.
We say a multivector is unitary if a^{†} = a^{-1} in a
particular frame of interest.
Given a central (all-commuting) i with i^{2}=-1 and i^{†}=-i,
any unitary (a^{^}^{T}=a^{-1} ; a^{†}=a^{-1}) multivector can be expressed as (iH)^{↑} º
e^{iH} where H is Hermitian (H^{^}^{T}=H ; H^{†}=H).
e-negating Conjugation
We define the frame-dependent e-negating conjugation ^{†} to be the automorphic
( (ab)^{†}=(a^{†})(b^{†}) )
conjugation negating only basis blades containing basis 1-vector e, irrespective of grade.
Typically e=e_{4} of negative (timelike) signature.
Extension Conjugations
The extension conjugations arise when we embed U_{N} in a higher dimensional space by extending
a given basis by postulating k further 1-vectors orthogonal to the U_{N} basis, as will be described later.
Extension conjugation ^{[-]} is the automorphic conjugation that negates only the "new" basis 1-vectors,
^{[+]} is the automorphic conjugation that negates only the new basis vectors of negative signature.
By combining the extension conjugation with involution ^{#} we create the unextended involution ^{(#)}
that negates blade b if ¯_{i}(b) is of odd grade where ¯ denotes projection into
the U^{N} .
Third bit Conjugation
Involution changes the sign of blades of odd grade. Reverse changes the sign of blades whose grade
has the second bit (bit 1) set. Semireversing third bit conjugation negates blades of grade
having the third bit set, ie. blades of grade 4,5,6,7, 12,13,14,15, 20,21,... .
Clearly we have fourth and fifth and higher bit conjugations for sufficiently large N.
Dualled Conjugations
Any conjugation ^{^} induces a dualled conjugation
a^{^}^{*} º (ai^{-1})^{^}i. Unlike undualled conjugations, the effect
of ^{^}^{*} on a k-blade depends on N as well as k, but not, note, on the signature i^{2}.
Conjugation tabulations
Effect of conjugations and other operations on illustrative blades (e_{4}^{2}=-1) | |||||||||||||
Symbol | Name | 1 | e_{1} | e_{12} | e_{123} | e_{4} | e_{14} | e_{124} | e_{1234} | e_{12345} | e_{123456} | e_{1234567} | k-blade x |
^{§} | reverse | + | + | - | - | + | - | - | + | + | - | - | (-1)^{½k(k-1)}x |
^{#} | involution | + | - | + | - | - | + | - | + | - | + | - | (-1)^{k}x |
^{§}^{#} | Clifford | + | - | - | + | - | - | + | + | - | - | + | (-1)^{½k(k+1)}x |
^{§} | mitian | + | - | + | - | + | - | + | - | + | - | + | -e_{4}xe_{4} |
^{†} | Hermitian | + | + | - | - | - | + | + | - | - | + | + | -e_{4}x^{§}^{#}e_{4} |
^{*} | dual (Â_{3}) | -e_{123} | -e_{23} | e_{3} | 1 | . | . | . | . | . | . | . | -xe_{123} |
^{*} | dual (Â_{3,1}) | -e_{1234} | -e_{234} | -e_{34} | e_{4} | -e_{234} | -e_{23} | -e_{3} | 1 | . | . | . | -xe_{1234} |
^{*} | dual (Â_{4,1}) | -e_{12345} | -e_{2345} | -e_{345} | e_{45} | -e_{2345} | -e_{235} | -e_{35} | e_{5} | 1 | . | . | -xe_{12345} |
^{-1} | inverse | + | + | - | - | - | + | + | - | - | + | + | x^{-2} x |
^{2} | square | +1 | +1 | -1 | -1 | -1 | +1 | +1 | -1 | -1 | +1 | +1 | x^{2} |
If we enumerate the 32 extended basis blades for e_{1234} then for each
of the four primary conjugations we can construct a 32-bit bitmask with 1 in bit i indicating that
the blade of index i flips sign under the conjugation. For the
1,e_{1},e_{2},e_{12},e_{3},e_{13},e_{23},e_{123},e_{4},e_{14},... bitwise basis ordering
these values are 0xFFFFFFFF for negation,
0x177E7EE8 for reverse, 0x96696996 for involution, and 0x69699696 for mitian, the latter value
assuming e_{4} to be the only minussquare basis 1-vector.
We can construct the flags for composite conjugations by XORing (exclusive-ORing)
the flags of the contributory primaries. Hermitian is accordingly 0xE87E81E8 and Clifford 0x8117177E.
With this ordering, a basis blade of grade k flips sign under ^{#} when (k & 1) ¹ 0
and under ^{§} when (k & 2) ¹ 0 .
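As a check on these values, the masks can be rebuilt from the grade rule for each primary conjugation; a minimal Python sketch (the bit conventions are as described above, with blade index i of grade popcount(i) and e_{4} at bit 3):

```python
# Rebuild the 32-bit conjugation sign masks for the bitwise basis ordering
# 1, e1, e2, e12, e3, ...  Bit i of a mask is set when blade i flips sign.

def popcount(i: int) -> int:
    return bin(i).count("1")

def mask(flips) -> int:
    """Pack 'blade i flips sign' into bit i of a 32-bit mask."""
    return sum(1 << i for i in range(32) if flips(i))

involution = mask(lambda i: popcount(i) & 1)   # odd grade
reverse    = mask(lambda i: popcount(i) & 2)   # grade has bit 1 set
# mitian flips when grade plus "contains e4" (bit 3) is odd
mitian     = mask(lambda i: (popcount(i) + ((i >> 3) & 1)) & 1)
negation   = mask(lambda i: True)

# Composite conjugations are XORs of the contributory primaries.
clifford  = reverse ^ involution
hermitian = clifford ^ mitian
```

Running this reproduces the hexadecimal constants quoted above.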
Directation
Directation takes 1-vector a to a if a¿e_{-} < 0 ; to -a
if a¿e_{-} > 0 ; or to 0 if
a¿e_{-} = 0 , for a given e_{-} with e_{-}^{2} < 0. It is not a conjugation since (a+b)^{[]} ¹ a^{[]} + b^{[]} in general
and it is defined only for 1-vectors. However, ^{[]} is frame invariant inasmuch as any alternate e_{-}' with e_{-}¿e_{-}' < 0 will
give the same directation.
Scalar Measures and Normalising
There are a surprising number of alternate meaningful "size values" for multivectors.
Associated with any conjugation ^{^} we have a scalar measure a^{^}_{*}a
= (a^{^}a)_{<0>} of a general multivector a and if this is nonzero
we can define ^{^}-normalisation
a^{~^} º a |(a^{^}a)_{<0>}|^{-½}
so that
a^{~^}^{^}_{*}a^{~^} = ± 1 .
We define scalar |a|_{^} º |(a^{^}a)_{<0>}|^{½} .
Unitisation
If a^{2}_{<0>}¹0 we define the
unitised multivector
a^{=} º a
|a^{2}_{<0>}|^{-½}
so that ((a^{=})^{2})_{<0>} = ± 1.
A multivector a is said to be unit if (a^{2})_{<0>} = ± 1.
Magnitude
We define the real positive scalar magnitude or length of k-vector a_{<k>} by
|a_{<k>}| = |a_{<k>}^{§}¿a_{<k>}|^{½} = |a_{<k>}¿a_{<k>}|^{½}
= |(a_{<k>}^{2})_{<0>}|^{½} .
We here follow Hestenes and Sobczyk and extend to general multivector a as
|a| º Ö(å_{k=0}^{N} |a_{<k>}|^{2}) .
Some authors favour
|a| º å_{k=0}^{N} |a_{<k>}| which is a fundamentally different measure
when applied to mixed grade multivectors.
The magnitude of a scalar is thus the absolute (unsigned) value
of the scalar and the magnitude of a Euclidean 1-vector is its
length Ö|a^{2}| = Ö|a¿a |
as for conventional vector methods.
In a Euclidean space, only 0 has zero magnitude.
In a non Euclidean space a multivector has zero magnitude iff
a_{<i>}¿a_{<i>} = 0 for i=0,1,..,N
, ie. a_{<i>} is either 0 or a null i-vector .
For Euclidean space Â_{N}, (a_{<k>} a_{<k>}^{§})_{<0>} = | a_{<k>}¿a_{<k>} | and we have
| a |^{2} = a_{*}a^{§} =
a^{§}_{*}a = å_{k=0}^{N} (a_{<k>}a_{<k>}^{§})_{<0>}
since a_{<i>}_{*}a_{<j>}^{§} = 0 for i¹j
To accommodate negative signatures we have the weaker expression
| a |^{2} = å_{k=0}^{N} |(a_{<k>}a_{<k>}^{§})_{<0>} |
.
Hence in Euclidean space |aÙb|^{2} = a^{2}b^{2} - (a¿b)^{2}
= -(aÙb)^{2}
For N=3, |aÙb| = |a×b|
and |aÙbÙc| = | a.(b×c)| .
|a+be_{12}|^{2} = ((a+be_{12})(a+be_{12})^{§})¿1
= ((a+be_{12})(a-be_{12}))¿1
= a^{2} + b^{2} so magnitude in Â_{2 +} is equivalent to the
traditional "modulus" of a complex number.
If a or b is null then |aÙb|^{2} = (aÙb)^{2} = (a¿b)^{2}
Conjugated Normalisation
If (a^{^}a)^{-½} exists for a particular conjugation ^{^} , then we have
^{^}-normalisation
a ^{~^} º (a^{^}a)^{-½}a
satisfying (a ^{~^})^{^}(a ^{~^}) = 1 . If a particular i has
i^{^}i = -1 then provided ^{^} is reversing or i commutes with a we can form
^{^}-antinormalisation
a ^{~-^} º -i(a ^{~^})
satisfying (a ^{~-^})^{^}(a ^{~-^}) = -1 .
We include the negation to make antinormalisation correspond to dual when i=i.
If ^{^} is reversing, (a^{^}a)^{^} = a^{^}a so a^{^}a contains only blades preserved by ^{^}. Thus if i^{^}=-i then a^{^}a has no i component.
Normalisation
If (a^{§}a)_{<0>} ¹0 we define the normalised multivector
a^{~} º a^{~§} º
a |(a^{§}a)_{<0>}|^{-½}
so that |a^{~}|=1 and a^{~}^{§}_{*}a^{~} = ± 1.
A multivector a is normalised if |a|=1 .
For non Euclidean spaces, normalisation and unitisation of multivectors
are distinct concepts that coincide for pure k-blades.
This definition neglects the pseudoscalar component of a^{§}a and would be typically used in
an algebra with i^{§}=-i.
Modulus
We here define the positive real scalar modulus of a multivector as a "forced Euclidean" magnitude.
We can only define the modulus with respect to a given orthonormal frame. The modulus is defined by
|a_{<k>}|_{+} º (a_{<k>}¿_{+}a_{<k>})^{½} º ((a_{<k>}^{†}a_{<k>})_{<0>})^{½}
with |a|_{+} = ( å_{k=0}^{N} |a_{<k>}|_{+}^{2} )^{½}
.
The modulus is thus computed by adding the squares of each coordinate
|a|_{+} = Ö å_{ijk..}(a^{ijk..})^{2}
when expressed with respect to the favoured extended frame.
In a Euclidean space, the modulus is equivalent to magnitude and so frame independent. Modulus
acts "across grades"
in the same way as does the magnitude so often coincides with
the traditional "complex modulus"
|x+iy|_{+}º
((x+iy)(x-iy))^{½}=(x^{2}+y^{2})^{½} .
Geometrically speaking, if we have a multivector of the form a + bb_{k} for some nonnull
k-blade b_{k} with b_{k}^{2}<0
then |a + bb_{k}|_{+}^{2} = a^{2} + b^{2}|b_{k}^{2}|
= (a+bb_{k})(a+bb_{k})^{^} where ^{^} is any conjugation satisfying
b_{k}^{^}=-b_{k}.
No multivector has zero modulus except 0, even in a non-Euclidean space.
Selfscale
If a^{2} = la for central (ie. universally commuting) l
(typically a scalar or a "complex" scalar-pseudoscalar pair)
we say a is
a l-selfscaling multivector.
We refer to l = |a|_{s} as the selfscale of a and note that if real, it may be negative.
Any central multivector is its own selfscale and in particular any scalar a is selfscaling with selfscale
a.
If a has selfscale l then either a=l or a is non-invertible.
If a and b are (anti)commuting selfscaling then
|ab|_{s} = (-) |a|_{s}|b|_{s} .
If a_{<0>}¹ 0 then
|a|_{s} = a^{2}_{<0>} a_{<0>}^{-1}
and more generally |a|_{s}
= (e_{ij..l}¿(a^{2})) (e_{ij..l}¿(a))^{-1}
for any blade e_{ij..l} present in a.
All null (a^{2}=0) multivectors are selfscaling with selfscale 0.
Selfscaling multivectors of selfscale (-)1 (a^{2}=(-)a) are known as (anti)idempotents.
[ Proof : If a^{-1} exists then a = (a^{2})a^{-1} = (la)a^{-1} = l , so a¹l
Þ a^{-1} does not exist
.]
We can extend the definition of selfscale to invertible multivectors.
Scalar-Normalisation
If a_{<0>}¹0 we define the
scalar-normalised multivector
a^{~} º a(a_{<0>})^{-1}
so that (a^{~})_{<0>} = 1.
Maximal-Coordinate-Normalisation
An easy basis-specific way of "normalising" a general multivector a is to rescale it by a positive multiplier
so that its maximal-absolute coordinate a_{[¥]} becomes (close to) ±1. This provides a safe and rapid
means of rescaling a multivector to ensure that it is not so large or small as to lead to numerical or stability problems
without relying on a more computationally expensive measure that may vanish for particular a.
It is natural to denote the absolute value of the maximal absolute coordinate |a_{[¥]}|
by |a|_{¥} and refer to it as the basis-specific infinity norm .
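A minimal sketch of this rescaling, assuming the multivector is held as a flat list of extended-frame coordinates (a representation this document does not fix):

```python
# Basis-specific infinity norm and maximal-coordinate normalisation.
# 'coords' is an assumed flat list of blade weights in the favoured frame.

def inf_norm(coords):
    return max(abs(c) for c in coords)

def inf_normalise(coords):
    """Rescale by a positive multiplier so the largest |coordinate| is 1."""
    m = inf_norm(coords)
    if m == 0.0:
        return list(coords)   # only the zero multivector has |a|_inf = 0
    return [c / m for c in coords]
```

Unlike the conjugation-based measures, this never vanishes for nonzero a, which is what makes it safe as a guard against numerical overflow or underflow.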
Trace
We here define the frame-invariant scalar real trace of N-D multivector a as 2^{N}a_{<0>} corresponding to the
conventional matrix trace (sum of lead diagonal elements) of the 2^{N}×2^{N}
matrix representation of a . The complex trace is 2^{N-1} times the
trace of a complex matrix representation. Multiplication by the dimension of the matrix representation
concerned ensures the multivector trace of 1 equals the matrix trace of 1.
Traces thus depend on the dimension of the geometric space
the result is considered to "reside in", making the trace of a multivector a somewhat artificial construct
based on the more fundamental "scalar part" and "scalar-pseudoscalar part" operators _{<0>}
and _{<0;N>}.
The New Norm
The "new norm"
|x|_{!} º ((x^{§}x)_{<0>}^{2} - (x^{§}x)_{<4>}^{2} )^{¼}
appears to act as a valid norm for even multivectors in Â_{4,1} but this is unproven.
Determinant
The determinant of a multivector is the traditional matrix determinant of a particular
2^{k}×2^{k} matrix representation of interest.
Det(ab) = Det(a) Det(b) follows from the corresponding matrix result.
Det(a)= Det(a1)=a^{2^{k}} so
Det(-1)= Det(1)=1 and so Det(lb)=±l^{2^{k}}
for any unit blade b.
A multivector a is said to be unimodular if Det(a)=1.
A unimodular versor has aa^{§}=a^{§}a=1 .
Such square matrix representations can have Â or C or Q elements when k<N
and we accordingly then have Â or C or Q valued determinants.
A scalar valued determinant is always provided by the 2^{N}×2^{N}
real matrix representation of a multivector a and it is this determinant that we will usually
denote Det(a).
If ¦(x)º Ax(A^{^}) has A^{^}A scalar so that ¦ is a grade-preserving linear transform then we define scalar Det(¦) º (¦(i))^{*} so Det(¦)^{2N} = (A^{^}A)^{N}
The complex matrix identity Det(A) = e^{Trace(ln(A))} has corresponding geometric form Det(a) = (2^{N-1}ln(a))_{<0;N>}^{↑} given odd N and a minussquare pseudoscalar.
One way to compute the inverse of a multivector is to
solve ax=1 as 2^{N} simultaneous linear equations in 2^{N} unknowns, which is possible iff
the determinant of a is non zero.
An immediate consequence of this, using the mathematical result that the determinant of any matrix having
two identical rows is zero, is that mixed multivectors of the form 1+e_{i} are
noninvertible. Furthermore (from standard matrix results of the determinant of a product being the product
of the determinants) we can deduce that if a is noninvertible, so too are ab and ba
for any multivector b.
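The linear-equations approach can be sketched as follows, assuming a coordinate array indexed by bitmask basis blades (so the 2^{N}×2^{N} left-multiplication matrix is built from a basis-blade product with a reordering sign):

```python
import numpy as np

# Invert a multivector by solving ax=1 as 2^N linear equations.
# Blades are bitmasks; 'sig' lists the squares of the basis 1-vectors.

def reorder_sign(a: int, b: int) -> int:
    """Sign from bringing the vectors of blade a past those of blade b."""
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def blade_product(a: int, b: int, sig):
    s = reorder_sign(a, b)
    for k, sq in enumerate(sig):          # shared vectors contribute e_k^2
        if (a >> k) & (b >> k) & 1:
            s *= sq
    return s, a ^ b

def left_mul_matrix(x, sig):
    n = len(x)
    M = np.zeros((n, n))
    for i, xi in enumerate(x):
        if xi:
            for j in range(n):
                s, k = blade_product(i, j, sig)
                M[k, j] += s * xi
    return M

def mv_inverse(x, sig):
    one = np.zeros(len(x))
    one[0] = 1.0                          # right hand side is the scalar 1
    return np.linalg.solve(left_mul_matrix(x, sig), one)
```

For example in Â_{2} this gives (1+e_{12})^{-1} = ½(1-e_{12}), while the matrix for 1+e_{1} is singular, confirming its noninvertibility.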
If a^{2}=la then either a=l or a is noninvertible, so
Det(a)=0 when selfscale |a|_{s} is defined.
Oppositioning
For every restrictive geometric product ¨ we have a frame-dependent oppositional difference
a¨_{¦}b =
å_{ij..}
å_{kl..}
¦(a^{ij..},b^{kl..})
e^{ij..}¨e^{kl..}
where ¦ : Â^{2} ® Â is usually a symmetric function.
Essentially we evaluate the restricted geometric product as normal but instead of multiplying the blade weightings
we use an alternate symmetric function ¦ : Â^{2}®Â .
Though looking somewhat like a product (and programmable by generalising a
multivector -product primitive) ¨_{¦} is not usually bilinear.
A particularly useful oppositional difference is provided by ¨=_{*} ;
¦(a,b) = |a-b| if ab<0 , and 0 else .
This yields a measure of how much "sign changing" occurs in the coordinates
when moving from a to b, useful when searching for minima or maxima
of multivector valued functions and when comparing spot gradients in
adaptive refinement integration techniques.
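For the ¨=_{*} case the measure reduces to a coordinatewise comparison; a sketch, assuming matching flat coordinate lists:

```python
# Sign-change measure: sum |a-b| over coordinates whose signs oppose.
# Zero when no coordinate flips sign between the two multivectors.

def sign_change_measure(a, b):
    return sum(abs(x - y) for x, y in zip(a, b) if x * y < 0)
```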
Inverses and Powers
We now consider some important multivector-valued functions of multivectors.
Inverse
We define the inverse or reciprocal of a Î Â_{N} by
a^{-1}a = aa^{-1} = 1 , if such exists.
We say a multivector is noninvertible (aka. singular) if no inverse exists.
We define the extended inverse of a Î Â_{N} to be a^{-1}
if such exists and a otherwise.
A nonzero 0-vector a has 0-vector inverse a^{-1} = 1/a.
Zero has extended inverse zero.
A nonnull 1-vector a has 1-vector inverse a^{-1} = a/a^{2} .
A null 1-vector has no inverse, though it has a distinct reciprocal 1-vector b with a¿b = 1 , eg. b = ½(e_{+}-e_{-}) for a = e_{+}+e_{-}.
A nonnull 2-vector a usually has inverse (a^{2})^{-1} a but this fails for N=4 with i^{2}=1 if
a^{2}=b ± bi for scalar b.
More generally
a^{2} is invertible provided (a¿a)^{2} - (aÙa)^{2} is invertible
.
For k-versor a_{k}=a_{1}a_{2}...a_{k} ,
a_{k}(a_{k}^{§}) = (a_{k}^{§})a_{k} =
a_{1}^{2}a_{2}^{2}...a_{k}^{2}
is a 0-vector so we have
a_{k}^{-1} = a_{k}^{§} /
a_{1}^{2}a_{2}^{2}...a_{k}^{2}
provided
a_{1}^{2}a_{2}^{2}...a_{k}^{2} ¹ 0.
But a_{k}^{§} = (-1)^{½ k(k-1)}a_{k} ,
so we have relatively rapid computation of versor inverses via
a_{k}^{-1} = (-1)^{½ k(k-1)} (a_{1}^{2}a_{2}^{2}...a_{k}^{2})^{-1} a_{k}
In particular, the inverse of versor
a + b(e_{i}Ùe_{j}) = e_{i}(ae_{i}^{-1} + be_{j})
is (ae_{i}^{-1} + be_{j})e_{i} / (e_{i}^{2}(ae_{i}^{-1} + be_{j})^{2})
= (a - b(e_{i}Ùe_{j})) / (a^{2} + b^{2}) for Euclidean contraction.
Inverse is somewhat like a reversing conjugation in that acting on a single (unit) blade it will at most
flip its sign, and (ab)^{-1} = b^{-1}a^{-1}.
However as (a+b)^{-1} ¹ a^{-1} + b^{-1} in general and a^{-1} may be undefined
inverse is decidedly not a conjugation.
If a^{^}a is easily invertible for a particular conjugation ^{^} then
a^{-1} = (a^{^}a)^{-1} a^{^} provides a comparatively efficient
inverse computation.
Further, if ^{^} is reversing or automorphic then the existence of a^{-1} implies the existence of
(a^{^})^{-1} = (a^{-1})^{^} and hence that of (a^{^}a)^{-1} = a^{-1}(a^{^})^{-1} .
Thus we can determine the invertibility of a by examination of a^{^}a
for any reversing or automorphic conjugation ^{^} , and may also be able to find a^{-1} by consideration
of a^{^}a for semireversing ^{^}.
Similar arguments hold for aa^{^} with a^{-1} = a^{^} (aa^{^})^{-1}
For N=3 (any signatures), Clifford conjugation ^{^} = ^{§}^{#} is suitable since
a(a^{§}^{#}) = (a(a^{§}^{#}))_{<0;N>} is a
scalar-pseudoscalar pair and so easily invertible when nonsingular. This method is known as the
Lounesto Inverse.
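A sketch of the Lounesto inverse for Â_{3}, assuming coordinates held in the bitwise basis ordering 1,e_{1},e_{2},e_{12},e_{3},e_{13},e_{23},e_{123}:

```python
# Lounesto inverse in R_3: a(a^{§#}) is a scalar-pseudoscalar pair, which
# inverts like a complex number since e123^2 = -1.

def gp3(x, y):
    """Geometric product in Euclidean R_3 over bitmask-indexed coordinates."""
    def sign(a, b):
        s, a = 0, a >> 1
        while a:
            s += bin(a & b).count("1")
            a >>= 1
        return -1 if s & 1 else 1
    out = [0.0] * 8
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            out[i ^ j] += sign(i, j) * xi * yj   # all e_k^2 = +1
    return out

GRADE = [0, 1, 1, 2, 1, 2, 2, 3]

def clifford_conj(x):
    """Clifford conjugation: negate grades 1 and 2, ie. (-1)^{k(k+1)/2}."""
    return [c if GRADE[i] in (0, 3) else -c for i, c in enumerate(x)]

def lounesto_inverse(x):
    xc = clifford_conj(x)
    p = gp3(x, xc)                  # p = s + t e123, s = p[0], t = p[7]
    s, t = p[0], p[7]
    d = s * s + t * t               # (s + t e123)^{-1} = (s - t e123)/d
    if d == 0:
        raise ZeroDivisionError("noninvertible multivector")
    q = [0.0] * 8
    q[0], q[7] = s / d, -t / d      # central, so order below is immaterial
    return gp3(xc, q)
```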
An even multivector a has
a^{§}^{#}a = a^{§}a of grade <0,4,8,12,...> so, since
for N£5 any 4-vector is a 4-blade, we can easily invert nonsingular even multivectors
when N£5.
More generally
a^{^}a
= a_{<^+>}^{2} - a_{<^->}^{2} + 2a_{<^->}×a_{<^+>}
and we can remove central grades from the commutator product.
Thus for ^{^} = ^{#}^{§} we have
a^{#}^{§}a
= a_{<0,3,4,7,8,..>}^{2} - a_{<1,2,5,6,9...>}^{2} +
2a_{<1,2,5,6,9..>}×a_{<3,4,7,8,..>}
with the commutator product term vanishing for N£3.
For even a and N=5 we have
a^{#}^{§}a
= a^{§}a
= a_{<0,4>}^{2} - a_{<2>}^{2} +
2a_{<2>}×a_{<4>} .
If a and b commute then (a+b)(a-b) = a^{2} - b^{2} so that (a+b)^{-1} = (a-b)(a^{2} - b^{2})^{-1} if such exists. If a and b anticommute we have (a+b)^{2} = a^{2} + b^{2} so that (a+b)^{-1} = (a+b)(a^{2} + b^{2})^{-1} if such exists.
The problem of inverting a multivector can be recast into
the problem of finding the matrix inverse of its matrix representation,
but while this may allow exploitation of existing code libraries, it is a
brute force approach. Spotting whether a^{^}a for a general extended coordinate representation a
is itself invertible typically involves deciding whether higher
grade components lie within an acceptable error tolerance of zero.
Integer Powers
The geometric square of multivector a is
a^{2} º aa .
We can define the k^{th} power of a multivector for integer k
in the obvious way by defining
a^{0} = 1 ; a^{k} = a(a^{k-1}) for k > 0 ;
a^{k} = (a^{-k})^{-1} for k < 0 provided inverse
(a^{-k})^{-1} exists.
We say a multivector is squarepure or scalar squared if its square is
a pure scalar. We say a is nilpotent or null if
a^{2}=0.
Pure scalars and 1-vectors are squarepure.
General bivectors are squarepure only for N£3. Pure Â_{N} bivectors
square to negative scalars for N£3 but for N>3
b_{2}^{2} can have 4-vector components.
If ab=ba=0 then (a+b)^{k} = a^{k} + b^{k} for k>0.
If a and b commute we have the binomial theorem (a+b)^{k} = å_{j=0}^{k} ^{k}C_{j}a^{j}b^{k-j} .
If a and b anticommute then we have (by induction on k):
(a±b)^{2k} = (a^{2}+b^{2})^{k}
and (a±b)^{2k+1} = (a±b)(a^{2}+b^{2})^{k}
with a^{2} and b^{2} commuting.
Square Roots
A square root of a multivector x is any multivector x^{½} satisfying (x^{½})^{2} = x .
If we seek an x^{½} which we have reason to believe
(perhaps because x satisfies x^{^}x = a^{2})
will satisfy
((x^{½})^{^})(x^{½}) = a
for some conjugation ^{^} and a known simpler multivector a with a^{^} = a
then since
x^{½} y = a + x
where y = (x^{½})^{^} + x^{½} satisfies y^{^} = y and is typically of fewer nonzero grades than
x and x^{½}, we can look for a square root for x of the form (a+x)y^{-1} where
y satisfies y^{^} = y and y^{2} = 2a + x^{^} +x = 2(a + x_{<^>}) .
Then we have
x^{½}
= (a+x) ( 2a + x + x^{^})^{-½}
= 2^{-½} (a+x) (a + x_{<^>})^{-½} .
reducing the square root to one of a known ^{^} invariant multivector 2a + x + x^{^} .
This can be seen as normalising a+x in that we are seeking a y^{-1} such that
((a+x)y^{-1})^{^} ((a+x)y^{-1}) = a^{^}a = a^{2}.
For example, if x and x^{½} satisfy x^{§}x = x^{½}^{§}x^{½} = 1 ,
then we have y restricted to grades <1,4,5,8,9,12,13...>. If x and so y are also even this reduces to <1,4,8,12,...> .
For N£7 we have the Dorst-Valkenburg rotor square root
x^{½} = 2^{-½}(1+x)(1+x_{<0>} + x_{<4>})^{-½}
and for N£5 the (1+x_{<0>} + x_{<4>})^{-½} inverse square root can be considered
as of a complex, hyperbolic, or nullic number according to the signature of 4-blade
x_{<4>} .
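For Â_{3} rotors x_{<4>}=0 and the formula collapses to a single real square root; a sketch, assuming a rotor stored as (scalar, three bivector coordinates):

```python
import math

# Dorst-Valkenburg square root specialised to R_3 rotors x = w + B
# (B a bivector, x_<4> = 0):  x^{1/2} = (1+x) / sqrt(2(1+w)).
# The (w, b1, b2, b3) layout is an assumption of this sketch.

def rotor_sqrt(r):
    w, b1, b2, b3 = r
    s = math.sqrt(2.0 * (1.0 + w))   # fails at w = -1 (a 180-degree rotor)
    return ((1.0 + w) / s, b1 / s, b2 / s, b3 / s)
```

Applied to the rotor cos q + sin q e_{12} this returns cos ½q + sin ½q e_{12}, the expected half-angle rotor.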
Exponentials and Logarithms
Introduction
Multivector logarithms and exponentials facilitate the interpolation (parameterisation) of continuous
geometric transforms such as rotations and translations. Their exponentiation a^{↑} º e^{a}
is simply defined and easy, if inefficient, to compute, given the geometric product, via a brute force summation.
We will here discuss stratagems for computationally efficient implementations requiring
greater mathematical sophistication.
Exponential
We define the exponential of a multivector
a as
a^{↑} º exp(a) º e^{a}
º å_{i=0}^{¥} i!^{-1} a^{i}
with e^{a} the traditional scalar exponential function when acting on scalars.
The notation e^{a} is frequently somewhat awkward for our purposes: the superscripting is
inconvenient and the letter "e" is extensively used to denote frame vectors and electron charges
so we will here move freely between e^{a} and the alternate notation
a^{↑}
, where symbol ^{↑} acts leftwards as
ab^{↑}c º a(b^{↑})c
º a(e^{b})c
as for conjugations and powers.
[ Note that the uparrow symbol [ ^{↑} ] may not print correctly from some browsers ]
For integer k³0 a^{↑+k} = a^{k}(a)^{↑} but otherwise we only have
a^{↑k} = a^{k}(a)^{↑} when a^{k} exists.
We denote a^{-↑} º (-a)^{↑} = (a^{↑})^{-1} and observe that if a^{^}=-a then a^{-↑} = a^{^}^{↑} = a^{↑}^{^} .
Regardless of the magnitude of the coordinates of a, the i!^{-1} becomes small so
rapidly with i that the infinite summation is convergent and in practice, typically only a few dozen terms
are necessary to compute a^{↑} within a given error tolerance. Thus the "brute force" summation approach is always
available when computing exponentials, if only as a last resort, and a^{↑} thus exists and is computable
to within an arbitrary accuracy. This may be computationally expensive however, since for example K^{↑}
for large positive scalar K requires i > K for i! to dominate K^{i}.
For large |K| , i!^{-1}K^{i} can become uncomfortably large while i£K.
One can exploit a^{↑} = ((k^{-1}a)^{↑})^{k} where real scalar k is a
numerically convenient value such as a power of 2 of similar magnitude to |a| .
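The rescaling stratagem can be illustrated with ordinary complex arithmetic standing in for any associative product:

```python
import cmath

# Brute-force series exponential plus the halve-and-square rescaling
# a^ = ((a/2^k)^)^(2^k), shown here on complex numbers as a stand-in
# for any associative multivector product.

def exp_series(z, terms=30):
    total, term = 1.0 + 0j, 1.0 + 0j
    for i in range(1, terms):
        term *= z / i            # term = z^i / i!
        total += term
    return total

def exp_scaled(z, terms=30):
    k = 0
    while abs(z) > 0.5:          # halve until the series is well behaved
        z, k = z / 2.0, k + 1
    r = exp_series(z, terms)
    for _ in range(k):           # square back up k times
        r *= r
    return r
```

Without the rescaling, exp_series(30+2j) with 30 terms is wildly wrong because i!^{-1}K^{i} grows enormous before the factorial takes over; with it, a short series suffices.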
It is natural to decompose
a^{↑} = cosh(a) + sinh(a) where
cosh(a) º ½(a^{↑} + (-a)^{↑})
= å_{i=0}^{¥} (2i)!^{-1} a^{2i}
and
sinh(a) º ½(a^{↑} - (-a)^{↑})
= å_{i=0}^{¥} (2i+1)!^{-1} a^{2i+1}
If b commutes with a then it also commutes with (la)^{↑} for any l that commutes
with a and b; while if b anticommutes with a
but commutes with l we have b(la)^{↑} = (-la)^{↑}b
= ((la)^{↑})^{-1}b.
The "exponent hopping rule" (e^{a})^{^} = e^{a^} " a
ie.
(a^{↑})^{^} = (a^{^})^{↑} " a
for a reversing conjugation ^{^}
can be written as ^{^}^{↑} = ^{↑}^{^} .
For any squarepure multivector [ a^{2} = ±|a|^{2} ]
we have
a^{↑} = e^{a} = cos(|a|) + a^{~} sin(|a|) if a^{2} < 0 ;
cosh(|a|) + a^{~} sinh(|a|) if a^{2} > 0 ;
1+a if a^{2} = 0 .
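A sketch of the three-case formula, together with its specialisation to Â_{3} bivectors (which are minussquare), using an assumed (scalar, bivector-coordinates) rotor layout:

```python
import math

# Closed-form exponential of a nonnull squarepure multivector a, returned
# as (s, c) with e^a = s + c * a~ ; a null a gives simply 1 + a.

def exp_squarepure(mag, square_sign):
    if square_sign < 0:
        return math.cos(mag), math.sin(mag)
    return math.cosh(mag), math.sinh(mag)

def rotor_exp(b1, b2, b3):
    """e^B for a bivector B in R_3 (B^2 = -|B|^2), as a rotor (w,b1,b2,b3)."""
    m = math.sqrt(b1 * b1 + b2 * b2 + b3 * b3)
    if m == 0.0:
        return (1.0, 0.0, 0.0, 0.0)
    s, c = exp_squarepure(m, -1)
    return (s, c * b1 / m, c * b2 / m, c * b3 / m)
```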
If a and b commute then so do a^{↑} and b^{↑} and
(a+b)^{↑} = a^{↑} b^{↑}
(with the coefficient of a^{i}b^{j} given by
(i+j)!^{-1} ^{i+j}C_{i} = i!^{-1} j!^{-1} )
but in general (a+b)^{↑} ¹ (a^{↑})(b^{↑}) .
If we can express a multivector a as the sum of commuting parts then we can simplify the computation by
forming the product of the exponentials of those parts. This is true for bivector a for N£
If a and b anticommute we have (by induction on k): (a±b)^{2k} = (a^{2}+b^{2})^{k} and (a±b)^{2k+1} = (a±b)(a^{2}+b^{2})^{k} so that
More generally we have a^{2} = a^{2} for central a giving (la)^{↑} = ( 1 + a^{2}½l^{2} + ... ) + (l + a^{2}l^{3}3!^{-1} + ...)a = cosh(al) + a^{-1} sinh(al)a where cosh(x) º ½(x^{↑} + (-x)^{↑}) ; sinh(x) º ½(x^{↑} - (-x)^{↑}) are the traditional complex hyperbolic functions if i^{2}=-1.
In particular, for unit multivector a , (fa)^{↑} (ya)^{↑} = ((f+y)a)^{↑} = (f+y)^{↑} a^{↑} for any central y,f.
We have a geometric form of De Moivre's formula ((a)^{↑})^{k} = (ak)^{↑} for integer k [ Usually written (e^{iq})^{k} = e^{(iqk)} ]
(d/df) (laf)^{↑}
= la(laf)^{↑} .
If a^{2}=-1 then (d/df) (laf)^{↑} = l(a(lf+p/2))^{↑} .
Whenever i^{2} = -1 we have a multivector version of Euler's equation:
(ip)^{↑} º e^{ip} = -1 .
Also e^{ip}a=a_{<+>}e^{ip}+
a_{<->}e^{-ip}
.
We have (la_{-1}(b))^{↑} º
(laba^{-1})^{↑} = a(lb)^{↑}a^{-1} º
a_{-1}((lb)^{↑}) " b ,invertible a, and central l
and the operator identity
(la)_{-1}^{↑} = 1 + (l^{2}^{↑}-1)a_{-1}
.
In Â_{N} for odd N, scalar and pseudoscalar additive combinations are central (commute with everything)
and so "factor out" in exponentiations.
Â_{3} multivector b_{<0,3>} acts as a "complex" (aka. part "imaginary")
scalar if we identify i with e_{123}.
We will write it as b_{<0,3>} to emphasise this complex scalar context.
Similarly
Â_{3} multivector b_{<1,2>} acts as a "complex" (part "imaginary")
3D 1-vector (b^{1}+b^{23}i)e_{1}
+ (b^{2}-b^{31}i)e_{2}
+ (b^{3}+b^{12}i)e_{3} .
which we denote by b_{<1,2>} to emphasise this complex 3D 1-vector context.
Since b_{<0,3>} commutes with complex b_{<1,2>}, exponential e^{b} factors as
e^{b} = e^{b<0,3>} e^{b<1,2>}
= e^{b<0,3>}(
cosh(|b_{<1,2>}|)
+ sinh(|b_{<1,2>}|)b_{<1,2>}^{~})
whenever complex scalar
|b_{<1,2>}|ºÖ(
(a_{1}+ib_{1})^{2}+
(a_{2}+ib_{2})^{2}+
(a_{3}+ib_{3})^{2}) is nonzero.
We also then have the useful biquaternion formula
e^{b} =
½(e^{(b<0,3>+|b<1,2>|)}(1+b_{<1,2>}^{~})
+ e^{(b<0,3>-|b<1,2>|)}(1-b_{<1,2>}^{~})) ; and
when b_{<1,2>}^{2}=0 we have
e^{b} = e^{b<0,3>}(1+b_{<1,2>}) .
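A sketch of this factorisation using Python complex numbers for the <0;3> pairs (identifying i with e_{123}), with an assumed coordinate layout [scalar, v^{1}, v^{2}, v^{3}, b^{23}, b^{31}, b^{12}, pseudoscalar]:

```python
import cmath

# Exponential in R_3 via e^b = e^{b<0,3>} e^{b<1,2>}, the <0;3> part acting
# as a complex scalar and the <1;2> part as a complex 3D 1-vector.

def exp_r3(s, v1, v2, v3, b23, b31, b12, p):
    z0 = complex(s, p)                            # b<0,3>
    w = (complex(v1, b23),                        # b<1,2> as complex 3-vector
         complex(v2, -b31),
         complex(v3, b12))
    L = cmath.sqrt(w[0] ** 2 + w[1] ** 2 + w[2] ** 2)   # complex "length"
    e0 = cmath.exp(z0)
    if L == 0:
        return e0, tuple(e0 * c for c in w)       # null case: e^{b03}(1 + b<1,2>)
    ch = cmath.cosh(L)
    sh = cmath.sinh(L) / L
    return e0 * ch, tuple(e0 * sh * c for c in w)
```

The returned pair is (complex <0;3> part, complex <1;2> vector); for a pure bivector q e_{12} it reproduces the rotor cos q + sin q e_{12}.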
Baker-Campbell-Hausdorff formulae
When a and b do not commute, evaluating (a+b)^{↑} is more complicated.
Recalling the k-fold commutator a^{×k}b º
a×(a×(....×(a×b))...)
= 2^{-k}å_{i=0}^{k}
(-1)^{i} ^{k}C_{i}
a^{k-i}ba^{i} ,
we have
a geometric version of the Baker-Campbell-Hausdorff formula
(a^{↑})b =
(å_{k=0}^{¥} 2^{k}(k!)^{-1} a^{×k}b) a^{↑}
[ The 2^{k} factors arise from the ½ in our definition of × ]
If a×b commutes with both a and b then
a^{↑}b^{↑}
= (a+b)^{↑} (a×b)^{↑}
= (a×b)^{↑} (a+b)^{↑}
giving the BCH exponent commuting rule
b^{↑}a^{↑} = a^{↑}b^{↑} (2b×a)^{↑} .
However, a×b is more likely to anticommute with a and b, as it does whenever
a^{2} commutes with b and b^{2} commutes with a.
Exponentiating Products
If a^{2} commutes with b and b^{2} commutes with a then
a×b commutes with a~b and so
since ab=
a×b + a~b we have
(ab)^{↑} = (a×b)^{↑} (a~b)^{↑} .
If blades a and b are both negated or both preserved by reversing conjugation ^{^}
we have
(a×b)^{^} =
-(a×b) while
(a~b)^{^} =
(a~b) .
Conversely, if one is preserved and the other is negated by ^{^} we have
(a×b)^{^} =
(a×b) while
(a~b)^{^} =
-(a~b) .
Either way we obtain
(ab)^{↑} = (ab)_{<^->}^{↑} (ab)_{<^+>}^{↑}
More generally, when faced with a^{↑} for mixed multivector a we can check
for a_{<^+>}×a_{<^->} nontrivially vanishing
for ^{^}=^{§} or ^{§}^{#} and if so, reduce the exponentiation to the product of two sparser exponentiations, decomposing a into
<0;1;4;5;8;9;..> and <2;3;6;7;10;11;...> components for ^{§} or
<0;3;4;7;8;..> and <1;2;5;6;9;10;..> components for ^{§}^{#}.
Since for odd N the <0;N> component of a is bound to commute with everything else in a, for N£7
the ^{§}^{#} commutation check reduces to
a_{<3;4>}×a_{<1;2;5;6>}=0 while for N£5 the ^{§}
commutation check reduces to
a_{<1;4>}×a_{<2;3>}=0
and the ^{§}^{#} to
a_{<3;4>}×a_{<1;5>}=0
.
Though we are of course free to decompose a into any two nonzero components (into odd and even parts, say)
and check them for commutativity, splitting as _{<§±>} or _{<§#±>}
is guaranteed to produce commuting or vanishing components when a is the product of two blades.
Exponentiating Idempotents
If b^{2} = lb for central l then
(ab)^{↑} = 1 + l^{-1}((la)^{↑}-1)b
and so when b^{2}=±b we have
(ab)^{↑} = 1 ± ((±a)^{↑}-1)b .
In particular, setting b=½(1±u) with u^{2}=1
so that b^{2}=b, we have
(a½(1±u))^{↑} =
1 + (a^{↑}-1)½(1±u)
Exponentiating Annihilators
If ab=ba=0 then (a+b)^{↑} =
a^{↑}b^{↑} = a^{↑} + b^{↑} - 1 .
In particular, if u^{2}=1 so that (1+u)(1-u)=0
and c,d and u commute with each other we have
(c½(1+u) + d½(1-u))^{↑}
= c^{↑}½(1+u) + d^{↑}½(1-u)
[ Proof :
Exploiting (c½(1±u))^{↑} =
1 + (c^{↑}-1)½(1±u) we have
(c½(1+u))^{↑} + (d½(1-u))^{↑} - 1
= 1 + (c^{↑}-1)½(1+u)
+ 1 + (d^{↑}-1)½(1-u) - 1
.]
Exterior Exponential
We also have the exterior exponential or outer exponential
e^{Ùa} º å_{i=0}^{¥} a^{Ùi} (i !)^{-1}
where a^{Ùi} º aÙaÙ....Ùa , there being i
terms in the outer product. For any pure (non-scalar) blade a^{Ùi} = 0 for i>1
and for mixed multivectors having zero scalar part, e^{Ùa} is a strictly finite summation having at most N terms
. For bivector
a decomposing as a sum of j 2-blades
a =
a_{1}+a_{2}+..+a_{j}
we have
e^{Ùa} = (1+a_{1})(1+a_{2})...(1+a_{j})
which is invertible if a_{i}^{2} ¹ 1 for i=1,2,..j.
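A sketch of the outer exponential for a zero-scalar-part multivector over bitmask-indexed coordinates (the wedge of basis blades being their signed product when disjoint, and zero otherwise, so the signature never enters):

```python
# Exterior (outer) exponential: a finite sum when the scalar part is zero.

def wedge(x, y, n_blades):
    def sign(a, b):
        s, a = 0, a >> 1
        while a:
            s += bin(a & b).count("1")
            a >>= 1
        return -1 if s & 1 else 1
    out = [0.0] * n_blades
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            if xi and yj and not (i & j):         # disjoint blades only
                out[i | j] += sign(i, j) * xi * yj
    return out

def outer_exp(x, dim):
    """e^{wedge x} for x with zero scalar part; terminates by grade dim."""
    n = 1 << dim
    total = [0.0] * n
    total[0] = 1.0
    term = list(total)
    fact = 1.0
    for i in range(1, dim + 1):
        term = wedge(term, x, n)                  # x^{wedge i}
        fact *= i
        for k in range(n):
            total[k] += term[k] / fact
    return total
```

For the bivector e_{12}+e_{34} in 4D this yields 1 + e_{12} + e_{34} + e_{1234}, matching the (1+a_{1})(1+a_{2}) factorisation.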
Logarithm
The geometric logarithm of a multivector b is a multivector ln(b) º b^{↓} satisfying
(b^{↓})^{↑} = b , when such exists. It is undefined when such does not exist.
Geometric logarithms are not uniquely defined unless we specify a minimisation
of some kind to provide a principal logarithm, and even then an ambiguity may persist.
For any multivector i with i^{2}=-1 that commutes with b we have alternative logarithms
b^{↓} ±2kpi for any integer k, each of which will generate a different set of "fractional powers" of b.
More generally (a + 2kpb)^{↑} = a^{↑} " kÎZ provided
b commutes with a and has b^{2}=-1 .
Thus we must always think of having a logarithm for b rather than
the logarithm of b.
If we have a logarithm of a multivector then we can easily
compute arbitrary powers of it (including its unique inverse and particular square roots) via
a^{a} = (a(a^{↓}))^{↑} , since ^{↑} is intrinsically computable
via its convergent summation. We have
ln(ab) º (ab)^{↓} = a^{↓} + b^{↓} for any positive scalar a
but (ab)^{↓} = a^{↓} + b^{↓} holds in general only for commuting a and b.
For negative a we require a unit minussquare i commuting with b with which to define
(-b)^{↓} = pi + b^{↓} .
(a+a)^{↓} = cosh^{-1}(a(a^{2}+a^{2})^{-1})a^{~} if a^{2}>0 .
If u^{2}=-1 then (±u)^{↓} = ±½pu.
If u^{2}=1 then
(a½(1+u) + b½(1-u))^{↑} =
a^{↑}½(1+u) + b^{↑}½(1-u)
provides
(a½(1+u) + b½(1-u))^{↓}
= a^{↓}½(1+u) + b^{↓}½(1-u)
= ½((ab)^{↓} + (ab^{-1})^{↓}u)
provided i commutes with u and has i^{2}=-1.
In particular,
u^{↓}
= ½pi(1-u)
and
(ip½(1±u))^{↑}
= -/+ u provide logs for plussquare unit blades u^{2}=1 .
[ Proof :
u^{↓}
= (½(1+u)-½(1-u))^{↓}
= 1^{↓}½(1+u) + (-1)^{↓}½(1-u) =
ip½(1-u) . Also
(ip½(1±u))^{↑}
= i(±i½pu)^{↑}
= i(±iu) = -/+ u.
.]
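The claimed logs can be checked numerically (a check of this sketch, not part of the text) by representing u as the 2×2 matrix [[0,1],[1,0]], which squares to 1, and taking i to be the commuting scalar 1j:

```python
import numpy as np

# Verify u^ = ip/2 (1-u) and (ip/2 (1+u))^UP = -u on a matrix model of u.

u = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def expm_series(M, terms=40):
    """Brute-force matrix exponential by its convergent power series."""
    total = np.eye(len(M), dtype=complex)
    term = np.eye(len(M), dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        total += term
    return total

log_u = 0.5j * np.pi * (I2 - u)   # the claimed logarithm of u
```

Exponentiating log_u recovers u, and exponentiating ½pi(1+u) recovers -u, as asserted above.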
More generally
(l + mu)^{↓} =
(½(l+m)(1+u) + ½(l-m)(1-u))^{↓}
= ½((l^{2}-m^{2})^{↓} + ((l+m)(l-m)^{-1})^{↓} u) .
If v^{2}=-1 then (±½pv)^{↑} = ±v gives us
(lv)^{↓} = l^{↓} + ½pv.
If s^{2}=0 then s has no logarithm.
If a^{2} = a^{2} for central a then
(la)^{↑}
= cosh(al) + a^{-1} sinh(al)a
provides
(±½pa^{-1}a)^{↑}
= a^{-1} sinh(½p)a
= ((a^{-1} sinh(½p))^{↓})^{↑}a
yielding two principal logarithms for a
±½pa^{-1}a - (a^{-1} sinh(½p))^{↓} .
When a=i this is
a^{↓} =
-/+½pia + (i sinh(½p))^{↓}
= -/+½pia + ½pi + ( sinh(½p))^{↓} .
Real constant ( sinh(½p))^{↓} = (½(½p^{↑}-(-½p)^{↑}))^{↓} » 0.833473703_{10}
= 0.D55E8858_{16} is thus important when constructing logarithms.
For odd (even) N a general (even) multivector a with invertible c=a_{<0;N>} decomposes into commuting factors a=c(1+n) where c=a_{<0;N>} and n=c^{-1}a-1 has zero <0;N> component. We have a^{↓} = c^{↓} + (1+n)^{↓} , reducing the logarithm to "complex" logarithm c^{↓} and a log of a multivector having unit scalar and zero pseudoscalar components.
A result from complex matrix theory holds that any unitary matrix (U^{†}=U^{-1} where ^{†} = ^{T}^{^} is the conjugated transpose)
can be expressed as U=(Hi)^{↑} where H is Hermitian (H^{†}=H) so
for odd N geometric algebras for which a 2^{½(N-1)}×2^{½(N-1)} complex
matrix representation exists and
i^{†}=-i , any multivector A with
A^{§}^{[-]}A=1 has a logarithm a=A^{↓} with
a^{*}^{§}^{[-]}=a^{*} .
In a Euclidean space ^{†}=^{§} so in Â_{3} any versor has a logarithm of grade <0;1> and in Â_{7}
any versor has a logarithm of grade <0;1;4;5>.
Hyperbolic Functions
Scalar hyperbolic functions cosh and sinh arise in geometric algebras through exponentiating plussquare blades in similar manner to
trigonometric functions sin and cos arising from the exponentiation of minussquare blades.
cosh(f) º ½(f^{↑} + (-f)^{↑}) Î [1,¥)
attains minimal value 1 only at f=0 and is approximated by ½|f|^{↑} for large |f|
and by 1 + ½f^{2} + (4!)^{-1} f^{4} + _{O}(f^{6}) for small |f| .
cosh^{-1}(m) = ln(m ± (m^{2}-1)^{½})
providing positive and negative "inverses" of a 0-symmetric function for m³1.
sinh(f) º ½(f^{↑} - (-f)^{↑}) Î (-¥,¥) is approximated by S_{ign}(f)½|f|^{↑} for large |f|
and by f + (1/6)f^{3} + _{O}(f^{5}) for small |f| .
sinh^{-1}(m) = ln(m + (m^{2}+1)^{½}) for -¥<m<¥ .
cosh'(f) = sinhf ; sinh'(f) = coshf ;
sinh(q±f) = sinhq coshf ± coshq sinhf ;
cosh(q±f) = coshq coshf ± sinhq sinhf .
We can generalise to multivector hyperbolic functions
cosh(a) º ½(a^{↑} + (-a)^{↑})
= å_{k=0}^{¥} (2k)!^{-1}a^{2k}
and
sinh(a) º ½(a^{↑} - (-a)^{↑})
= å_{k=0}^{¥} (2k+1)!^{-1}a^{2k+1}
which are both well defined and intrinsically computable via the summation definitions.
Central Powers
If u^{2}=1 and i commutes with u and has i^{2}=-1 then
we have
k^{2} distinct k^{th} roots of u given by
(2pik^{-1}i)^{↑}½(1+u) +
((2j+1)pk^{-1}i)^{↑}½(1-u) for i,j Î {1,..,k}
and more generally we have k^{2} k^{th} roots of
a½(1+u) + b½(1-u) provided by
a^{k^{-1}}½(1+u) + b^{k^{-1}}½(1-u) for any "complex" a, b
formed from 1 and i.
Thus, for example, we have four "square roots" for e_{1} in Â_{3}, specifically
±½(1+e_{1}) ±e_{23}½(1-e_{1}) ,
and many more when N is large.
In defining a "principal" k^{th} root it is natural to favour i
for i when the pseudoscalar is central and of negative signature, and to favour i=j=1 so
if i commutes with u we define four "principal square roots" of u
u^{½}
= ±(½(1+u) ± i½(1-u))
= ±½((1±i) + (1-/+i)u)
, with inverses
±(½(1+u) -/+ i½(1-u)).
If a has a well defined principal logarithm a^{↓} we can generalise fractional powers of a to
a^{a} = (a(a^{↓}))^{↑} º e^{a ln(a)}
for any central a, and indeed a^{c} = (c(a^{↓}))^{↑} for any c commuting with
a.
Complex Numbers
Complex numbers z = x + yi = r(qi)^{↑}
with (x + yi)^{↑} = x^{↑}(yi)^{↑} and
(r(qi)^{↑})^{↓} = r^{↓} + qi are well understood.
All are loggable but zero, however if we take the principal logarithm with qÎ(-p,p]
there is a discontinuity in (z)^{↓} when crossing y=0 for negative x.
For qÎ(-p,½p) one can use
z^{↓} = r^{↓} + (q+2p)i but this merely moves the discontinuity to crossing x=0 for negative y.
It is consequently impossible to use the multivalued power z^{q}=(q(z)^{↓})^{↑} to define
z^{q} continuously over all zÎC.
1^{↓} = 0 and i^{↓} = ½pi provide i^{½} = (¼pi)^{↑}= 2^{-½}(1+i) and i^{i} = (i(i^{↓}))^{↑} = (-½p)^{↑} .
We can compute the square root of a complex number x+iy either
via the polar form as ±r^{½}(½qi)^{↑} or as
(x+iy)^{½} =
±2^{-½}((r+x)^{½} + i(r-x)^{½}) when y³0 ;
and ±2^{-½}((r+x)^{½} - i(r-x)^{½}) when y<0 ;
hence r=(x^{2}+y^{2})^{½} must be calculated along with two other real square roots,
but q(x,y) is unnecessary and the result is cartesian rather than polar.
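This Cartesian square root is a few lines of code; the sketch below (helper name `csqrt` is illustrative) returns the principal root as a real pair:

```python
from math import hypot, sqrt

def csqrt(x, y):
    # (x+iy)^(1/2) via three real square roots, no angle required:
    #   +-2^(-1/2)( (r+x)^(1/2) + i sign(y) (r-x)^(1/2) ),  r = (x^2+y^2)^(1/2)
    r = hypot(x, y)
    u = sqrt((r + x)/2)      # = 2^(-1/2)(r+x)^(1/2)
    v = sqrt((r - x)/2)
    return (u, v) if y >= 0 else (u, -v)
```

The result agrees with the polar-form principal root but needs no trigonometric evaluation.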
Hyperbolic Numbers
Before discussing computation of multivector logarithms, we must generalise "complex numbers" x+yi to allow for i^{2}=1 and i^{2}=0.
If i^{2}=1 we denote i by h and have hyperbolic numbers, isomorphic to the Â_{1,1 +} algebra
generated by 1 and e_{-}Ùe_{+}. These are less well studied than complex numbers and rarely implemented in standard libraries.
The hyperbolic number
x+yh has inverse (x^{2}-y^{2})^{-1}(x-yh) provided x ¹ ±y
but unlike the complex imaginary i, which has i^{↓} = ½pi, the hyperbolic imaginary h has no
pure hyperbolic logarithm. We require an i commuting with h with i^{2}=-1 in order to form
h^{↓} = ip½(1-h) ;
h^{½} = ½(1+h) + i½(1-h)
; and
h^{-½} = ½(1+h) - i½(1-h) .
If pseudoscalar i has i^{2}=-1 and commutes with h then i=i is possible, but we normally
favour an i that (anti)commutes with blades spanned by h in like manner to h,
so the dual i=hi^{-1} is preferable.
Similarly -1 has no hyperbolic log.
More generally, any hyperbolic number z in the (1)-quadrant { (x,y) : x³0 ; |x| > |y| } has a hyperbolic
logarithm z^{↓} , but -z (in the (-1)-quadrant) and ±hz
(in the (h)- and (-h)-quadrants) do not.
We must rely on
(-z)^{↓} = pi + z^{↓} ;
(hz)^{↓} = ip½(1-h) + z^{↓} ; and
(-hz)^{↓} = ip½(3-h) + z^{↓}
to provide logarithms over the entire "hyperbolic plane" except along the noninvertible
|x|=|y| "diagonals".
Again i=hi^{-1} or i=hi are the natural choices
if i^{2}=-1. If N is even and h is odd a suitable i may not exist.
Nullic Numbers
We form nullic numbers with a null imaginary n with n^{2}=0.
Exponentials and logs are trivial since (x + yn)^{↑} = x^{↑}(1+yn) provides
(x + yn)^{↓} = x^{↓} + x^{-1}yn for x>0
while (x+yn)^{-1} = x^{-2}(x-yn) for x¹0.
(x+yn)^{½} = x^{½}(1 + ½x^{-1}yn) for x>0 and
i|x|^{½}(1 + ½x^{-1}yn) for x<0 .
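These nullic (dual-number) formulas translate directly into code; a minimal sketch with pairs (x,y) standing for x+yn (illustrative only):

```python
from math import exp, log, sqrt

# nullic numbers x + y*n with n*n = 0, stored as (x, y) pairs

def nexp(a):
    x, y = a
    return (exp(x), exp(x)*y)        # (x+yn)^exp = e^x (1 + yn)

def nlog(a):
    x, y = a                         # requires x > 0
    return (log(x), y/x)             # x^log + x^-1 y n

def nsqrt(a):
    x, y = a                         # requires x > 0
    s = sqrt(x)
    return (s, y/(2*s))              # x^(1/2)(1 + (y/2x) n)

def nmul(a, b):
    (x1, y1), (x2, y2) = a, b
    return (x1*x2, x1*y2 + y1*x2)    # n^2 = 0 kills the y1*y2 term
```

Round-tripping exp/log and squaring the square root recover the original pair.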
Bi-imaginary numbers
The introduction of an i commuting with h moves us into the imaginary hyperbolic algebra generated by
{1,h,i} @ {1,e_{+}Ùe_{-},e_{1}Ùe_{+}Ùe_{-}} within Â_{1,2} @ Â_{0,1}^{%}
and enables logs to be taken of all hyperbolic numbers apart from
the noninvertibles on the x=±y quadrant boundaries.
If i^{2}=-1 we can decompose
a = a + bh where a and b are complex numbers,
involving commuting elements 1, h, i, and hi .
We can take i to be i,
hi, or hi^{-1} according to choice. If we take
i=i then (-1)^{½} has mixed grade <0;k;N>. If we take i=
hi^{-1} then
(-1)^{½}
has grade <0;k,N-k>.
Since 1, h, and i commute we have (a+b)^{↑} = a^{↑}b^{↑} and
(l+xh+yi+zhi)^{↑} =
l^{↑} (xh)^{↑} (yi)^{↑} (zhi)^{↑} provides a factored exponential
but logarithm (l+xh+yi+zhi)^{↓}
= (a+bh)^{↓}
= (a(1+ba^{-1}h))^{↓}
= a^{↓} + (1+ba^{-1}h)^{↓}
is more problematic.
Setting a =a_{1}+a_{2}i ; b = a_{3}+a_{4}i we have
a = a_{1}+a_{2}i+a_{3}h + a_{4}hi = a + bh
= g½(1+h) + d½(1-h)
where g=a+b=a_{1}+a_{3}+(a_{2}+a_{4})i ; d = a-b = a_{1}-a_{3}+(a_{2}-a_{4})i
provides the most natural decomposition of a bi-imaginary number into two complex-number
weightings for the annihilating idempotents ½(1±h). We then have
a^{↓} = g^{↓}½(1+h) + d^{↓}½(1-h)
a^{l} = g^{l}½(1+h) + d^{l}½(1-h)
and ab = ag½(1+h) + bd½(1-h) .
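The g,d weighting reduces bi-imaginary arithmetic to ordinary complex arithmetic; a Python sketch (pairs (a,b) standing for a+bh with complex a,b; helper names are illustrative):

```python
import cmath

# bi-imaginary q = a + b*h with complex a, b and h*h = +1, stored as (a, b)

def weights(q):
    a, b = q
    return a + b, a - b                 # g, d weights on the idempotents (1+-h)/2

def from_weights(g, d):
    return ((g + d)/2, (g - d)/2)

def bexp(q):
    g, d = weights(q)
    return from_weights(cmath.exp(g), cmath.exp(d))

def blog(q):
    g, d = weights(q)                   # fails only if a weight is zero
    return from_weights(cmath.log(g), cmath.log(d))

def bmul(p, q):
    gp, dp = weights(p); gq, dq = weights(q)
    return from_weights(gp*gq, dp*dq)   # weights multiply independently
```

Multiplying via the weights agrees with the direct expansion (a+bh)(c+dh) = (ac+bd) + (ad+bc)h, and exp/log round-trip whenever both weights are nonzero.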
We can also form bi-imaginaries with i and n. These lack an idempotised formulation and are best represented as
a + bn for complex a,b Î A_{lgebra}(i)
Computing Exponentials and Logarithms
We have seen that if u^{2}=1 so that 1±u annihilate each other, then
provided c and d commute with u and each other we have
(c½(1+u) + d½(1-u))^{↑}
= c^{↑}½(1+u) + d^{↑}½(1-u) .
More generally, if we can express a multivector as a "weighted" sum of K idempotents that annihilate each other
then, provided all the weights commute with the idempotents and each other, we can reduce an exponential or logarithm
computation to the separate exponentials or logs of the K weightings. We will here refer to such a decomposition into
real-weighted mutually annihilating idempotents as a K-idempotised formulation
and a complex-weighted sum of K mutually annihilating idempotents as a K-idempotised complex formulation.
We can thus compute exponentials and logs of such forms without resorting to multivector summations. We have "relegated" all the "summationness"
into the exponentiations and logs of complex or hyperbolic or nullic or bi-imaginary weightings. These exponents and logs may well
themselves involve summations to evaluate (or approximate), but are of a simpler kind and, in the complex case,
are well studied and well implemented in many math libraries.
First observe that we can decompose a into its <0;N> and remaining grades as a = a + b where a = a_{<0>} + a_{<N>} and b has zero <0;N> component. If b^{2} is central then, setting b = (b^{2})^{½} (which may require an i), we have 2-idempotised formulation a = a + bb^{~} = (a+b)½(1+b^{~}) + (a-b)½(1-b^{~}) with b^{~}^{2} = 1.
More generally, if we can decompose a as a = a_{1} + a_{2}A +
b where b^{2} = (b_{1}+b_{2}A)^{2} for real
a_{1},a_{2}, b_{1}, b_{2} and some unit or null multivector A
that commutes with a then we have
a = a + bb^{~} where b^{~} = (b^{2})^{-½}b has
b^{~}^{2}=1 ; a is in algebra { 1, A } ; b is in algebra { 1,A,i} .
This fails when (b^{2})^{-½} does not exist, which includes when b^{2}=0,
but for a nonzero invertible b^{2} we have
a^{-1} = (a^{2}-b^{2})^{-1} (a - bb^{~})
provided a^{2} ¹ b^{2} ;
and
a^{↑} = (a+b)^{↑} ½(1+b^{~}) + (a-b)^{↑} ½(1-b^{~})
We will refer to a blade A "factored out" in this manner as a decentralisation blade for a.
Blade A is "central" in that it commutes with all the blades of interest (the blades comprising a).
If A^{2}=-1 we have no need of an additional i and have a "decomplexification" in which the geometric essence of a is distilled into two "complex" A_{lgebra}[A] numbers (ie. an unnormalised qubit) and a unit or null multivector b. We can think of a multivector of this form as being a unit or null multivector b that has "acquired some A-clutter". It has not only been rescaled and added to a scalar multiple of bi; a scalar and a multiple of A have also been added.
For non-null A we can replace a minus-squared A with -Ai if necessary
to ensure A^{2}=1 whereupon the
4-idempotised complex form
a = (a+b)½(1+b^{~}) + (a-b)½(1-b^{~})
is available where the a±b factors are themselves expressed
as complex-weighted combinations of ½(1±A).
We can regard this as the natural form for
multivectors in the commutative algebra [ 1,A,b,i ]
= [ 1,½(1±A),½(1±b),i ] .
If A^{2}=1 we thus have the 4-idempotent complex decomposition
a = (x_{1}½(1+A)+x_{2}½(1-A))½(1+b) + (x_{3}½(1+A)+x_{4}½(1-A))½(1-b)
representing a with four complex (ie. [1,i]) numbers x_{1},x_{2},x_{3},x_{4} ;
unit blades A and i; and unit multivector b. We can then compute
logs, exponentials and powers of a simply by computing the corresponding logs, exponentials, or powers of the four x_{1},x_{2},x_{3} and x_{4} complex weights;
provided only that all four complex weights are nonzero.
For null A we have the 1-idempotised nullic bi-imaginary form (a + b)½(1+b^{~}) + (a - b)½(1-b^{~}) .
We can form a logarithm for any nonzero complex number; and given a negative signature i commuting with h (or n) we can form a logarithm for any hyperbolic x+yh provided |x|¹|y| (or any nonzero nullic x+yn).
We must also address the case a = a + bb where b^{2}=0 .
It can then be easily verified that
a^{↑} = a^{↑}(1+bb) and when a^{-1} exists we have
a^{↓} = a^{↓} + a^{-1}bb so that
a^{c} = a^{c}(1+ca^{-1}bb)
for any c commuting with a. In particular
a^{-1} = a^{-2}(a - bb)
.
[ Proof :
a^{l} = (la^{↓})^{↑}
= (l(a^{↓} + a^{-1}bb))^{↑}
= a^{l}(la^{-1}bb)^{↑}
= a^{l}(1+la^{-1}bb) since (a^{-1}bb)^{2}=0 ; setting l=c gives the results
.]
This approach fails when a^{-1} does not exist, such as when a^{2}=0.
If A is null we decompose as a = a + bA and exploit (a + bA)^{↓} = a^{↓} + a^{-1}bA for a¹0.
This approach generalises to supposing a =
a_{0} + a_{1}a_{1} + a_{2}a_{2} + ... +a_{k}a_{k}
where a_{1},a_{2},..a_{k} are k commuting (rather than the stricter mutually annihilating) multivectors with a_{j}^{2}=1
and the a_{j} are either real or complex with an i commuting with the a_{j}.
a =
(a_{0} + a_{k} + a_{1}a_{1} + .. +a_{k-1}a_{k-1})
½(1+a_{k})
+ (a_{0} - a_{k} + a_{1}a_{1} + .. +a_{k-1}a_{k-1})
½(1-a_{k})
=
( (a_{0} + a_{k} + a_{k-1} + a_{1}a_{1} + .. +a_{k-2}a_{k-2})½(1+a_{k-1})
+ (a_{0} + a_{k} - a_{k-1} + a_{1}a_{1} + .. +a_{k-2}a_{k-2})½(1-a_{k-1}) )
½(1+a_{k})
+
( (a_{0} - a_{k} + a_{k-1} + a_{1}a_{1} + .. +a_{k-2}a_{k-2})½(1+a_{k-1})
+ (a_{0} - a_{k} - a_{k-1} + a_{1}a_{1} + .. +a_{k-2}a_{k-2})½(1-a_{k-1}) )
½(1-a_{k})
=
å_{i=0}^{2^{k}-1} b_{i}b_{i}
where the b_{i} are the 2^{k} mutually annihilating idempotents
2^{-k}(1±a_{1})(1±a_{2})...(1±a_{k})
and the 2^{k} complex b_{i} = a_{0} ± a_{1} ± ...
± a_{k} with the ± associated with a_{j} and a_{j} being
determined by the j^{th} binary bit of the enumerator i.
We then have
a^{↓} = å_{i=0}^{2^{k}-1} b_{i}^{↓}b_{i}
and
a^{↑} = å_{i=0}^{2^{k}-1} b_{i}^{↑}b_{i}
and have reduced the geometric logarithm to 2^{k} separate complex logarithms.
The condition of loggability and invertibility is that all 2^{k} complex weights are nonzero .
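The 2^{k} weight bookkeeping is mechanical; the following sketch (illustrative names; commuting elements u_{j} with u_{j}^{2}=1 are handled purely through their coefficients) returns the coefficient of each product of the u_{j} in a^{↑}:

```python
import cmath
from itertools import product

def exp_coeffs(a0, alphas):
    # a = a0 + sum_j alphas[j]*u_j for commuting u_j with u_j^2 = +1.
    # Weights b_s = a0 + sum_j (+-)alphas[j] sit on idempotents 2^-k prod(1 +- u_j).
    # Returns dict: frozenset S -> coefficient of prod_{j in S} u_j in exp(a).
    k = len(alphas)
    coeffs = {}
    for signs in product((1, -1), repeat=k):
        w = cmath.exp(a0 + sum(s*al for s, al in zip(signs, alphas)))
        # expand 2^-k prod_j (1 + s_j u_j): subset S carries prod_{j in S} s_j
        for subset in product((0, 1), repeat=k):
            key = frozenset(j for j, inc in enumerate(subset) if inc)
            sgn = 1
            for j, inc in enumerate(subset):
                if inc:
                    sgn *= signs[j]
            coeffs[key] = coeffs.get(key, 0) + sgn * w / 2**k
    return coeffs
```

For k=1 this reproduces e^{a_{0}}(cosh a + u sinh a), and for general k the 2^{k} subset coefficients give the full exponential.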
More generally still, we assume a given multivector exists in the algebra generated by k+l+1 commuting blades, one of which (i) has negative unit square, k of which have positive unit square, with the remaining l null. We can represent such a multivector with 2^{k} nullic (l+1)-imaginary numbers, each of which comprises 2^{l} complex numbers, for the 2^{k} idempotents ½(1±b_{1})½(1±b_{2})... ½(1±b_{k}) where the b_{i} are the k plussquare blades. This is representable with k+l+1 blades (including i) and 2^{k+l+1} real values divided into 2^{k} separately exponentiated and logged nullic (l+1)-imaginaries of the form q = a + å_{i=1}^{l} b_{i}n_{i} + å_{i<j} b_{i,j}n_{i}n_{j} + .... for commuting null n_{i} .
Example: (a_{1}a_{2})^{↓}
Suppose, for example, that we require product logarithm (a_{1}a_{2})^{↓} where a_{1} and a_{2} are noncommuting k-blades.
The delta product a_{1}Da_{2} is a blade of
even grade d £ 2k that commutes with a_{1}a_{2}. It is thus natural to set A = (a_{1}Da_{2})^{~}
and separate a_{1}a_{2} into a "complex" <0;d> component and a <2;4;..;d-2> component and see whether the latter has a
<0;d> grade square. If it does, then (a_{1}Da_{2})^{~} provides a decentralisation blade
for a_{1}a_{2}.
For a more general a not the product of two known blades, it is sometimes worth separating out the maximal grade and checking (by squaring it) whether it is a pure
blade, and then checking for commutation with a.
This is discussed more fully in Transforming k-blades.
Example : (l(w+e_{¥}d))^{↓}
As a second example, consider the spinor (l(w+e_{¥}d))^{↑} for 2-blade w with
w^{2}=-1 and nonunit 1-vector d both in U_{N} and orthogonal null GHC extendor e_{¥}
orthogonal to d and w . In the GHC embedding described later
this represents a simultaneous rotation in 2-plane w by angle 2l and translation by 2ld.
For ^_{w}(d) ¹ 0, the obvious decentraliser is null 4-blade e_{¥}Ù(dÙw)
and if ^_{w}(d)^{2}>0 we can use e_{¥0}(dÙw) as an i.
Thus
(e_{12} + de_{¥3} + fe_{¥1})
= e_{¥0}e_{123}(1 - de_{¥}e_{123})(e_{¥0}e_{3} - fe_{¥}e_{23})
= i(1-dA)B
= (i+de_{¥})B
= i(1-dA) ½(1+B) - i(1-dA) ½(1-B)
where i=e_{¥0}e_{123}
; A = e_{¥}e_{123} (or e_{¥}) ; and
B = e_{¥03} - fe_{¥}e_{23}
= e_{¥}Ù(e_{0}-fe_{2})Ùe_{3}
all commute and have squares -1, 0, and +1 respectively. More generally, these are
A = e_{¥}Ù(dÙw)^{~} (or its (N-2)-blade dual if preferred);
B = w^{*} + (e_{¥}¯_{w}(d))^{*} for duality in i=e_{¥0}e_{123}.
We deduce
(l(e_{12} + de_{¥3} + fe_{¥1}))^{↑}
= cos(l) + sin(l)lde_{¥}e_{123}
+ cos(l)lde_{¥3}
+ sin(l)(e_{12} + fe_{¥1})
(l(e_{12} + de_{¥3} + fe_{¥1}))^{↓}
= l^{↓} - de_{¥}e_{123} + ½p(e_{12} + fe_{¥1})
expressible as 1-idempotised forms with nullic bi-imaginary
A_{lgebra}{e_{¥}, e_{¥0}e_{123}} weights
(±li)^{↑}(1±lde_{¥})
and
l^{↓} ± ½pi - de_{¥}e_{123}
respectively for idempotents ½(1±(e_{¥03}-fe_{¥23})) where i=e_{¥0}e_{123}.
[ Proof :
(l(e_{12}+de_{¥3}+fe_{¥1}))^{↑}
= (li(1-dA))^{↑} ½(1+b) +(-li(1-dA))^{↑} ½(1-b)
= (li)^{↑}(1-ldiA))½(1+b) +
((-li)^{↑}(1+ldiA))½(1-b)
= cos(l) + sin(l)ldA
- cos(l)ldiAB
+ sin(l)iB
= cos(l) + sin(l)lde_{¥}e_{123}
- cos(l)ldie_{¥12}
+ sin(l)(e_{12} + fe_{¥1})
(e_{12}+de_{¥3}+fe_{¥1})^{↓}
= (i(1-dA))^{↓}½(1+B) +(-i(1-dA))^{↓}½(1-B)
= (i^{↓}-i^{-1}idA)½(1+B)
+((-i)^{↓}-i^{-1}idA)½(1-B)
= (½pi-dA)½(1+B)
+(-½pi-dA)½(1-B)
= -dA + ½piB
= -de_{¥}e_{123} + ½p(e_{12} + fe_{¥1})
exploiting AB = e_{¥12} ; iAB = -e_{¥3}
; iA = e_{¥0}e_{123}e_{¥}e_{123} = -e_{¥}
; iB
= e_{¥0}e_{123}(e_{¥03} - fe_{¥}e_{23})
= e_{12} + fe_{¥1}
.]
To allow for negative signatured ^_{w}(d) we move to Â_{4,1}^{%} and set i=e_{¥0}e_{12345} with i^{§}=-i. We then have A = e_{¥}Ù(dÙw)^{~} as before but B becomes 5-vector B = e_{¥0}e_{345} - fe_{¥}e_{2345} = w^{*} + (e_{¥}¯_{w}(d))^{*} (dualing in e_{¥0}e_{12345}).
To recover the spinor form (l(w+e_{¥}d))^{↑} from an extended basis coordinate form a
we can exploit
a_{<0>} = cos(l)
e^{¥}¿(e_{¥}a) = cos(l) + ^_{e¥0}(a_{<2>}) = cos(l) + sin(l)w
a_{<4>} = sin(l)le_{¥}ÙdÙw
Þ (e^{¥}w)¿(a_{<4>}) = sin(l)l^_{w}(d)
e^{¥}¿a = cos(l)l^_{w}(d) + sin(l)¯_{w}(d)
where e^{¥}=-e_{0}.
Example : (l(w+e_{¥}d + ge_{¥0}))^{↓}
Null 1-vector e_{¥} and unit 2-blade e_{¥0} orthogonal to w and d are as described in the
Generalised Homogeneous Coordinates
section. This corresponds to simultaneous rotation by angle 2l in 2-plane w, translation by 2ld and dilation by factor
(2lg)^{↑}.
e_{12}+e_{¥}(de_{3}+fe_{1}) + ge_{¥0}
= (g+Ai)B
where unit A =
e_{3} + dg^{-1}e_{¥} (so Ai=e_{¥0}e_{12}+dg^{-1}e_{¥}e_{123}) and
B = e_{¥}Ùb
= e_{¥}Ù(e_{0}+b)
commute, providing idempotised form
(g+i)½(1+A)½(1+B)
+ (g-i)½(1-A)½(1+B)
- (g+i)½(1+A)½(1-B)
- (g-i)½(1-A)½(1-B)
; so long as we set
b
= f(1+g^{2})^{-½}(ge_{1}-e_{2}) + dg^{-1}e_{3}
= f(ge_{1}-e_{2})^{~} + dg^{-1}e_{3}
with b^{2}=f^{2}+d^{2}g^{-2} .
Hence (l(e_{12}+e_{¥}(de_{3}+fe_{1}) + ge_{¥0}))^{↓}
= l^{↓} + ½(1+g^{2})^{↓}
+ (½p + tan^{-1}(g^{-1})A - ½pB)i
[ Proof : l^{↓} + (g+iA)^{↓} + B^{↓}
= l^{↓} + ½(1+g^{2})^{↓} + tan^{-1}(g^{-1})iA + i½p(1-B) .
Expression not numerically verified.
.]
(l(e_{12}+fe_{1}+de_{3}+ge_{¥0}))^{↑} = cosh(lg) cos(l) + sinh(lg) sin(l)Ai + sinh(lg) cos(l)B + ((lg)^{↑} sin(l) + (-lg)^{↑} cos(l))ABi
With AB = e_{¥}Ù(e_{0}+b)Ùe_{3} ; ABi = e_{¥}Ùb' + e_{12} where b' = f(e_{1} + ge_{2})^{~} = ¯_{w}(b)w .
For e_{12}+e_{¥}(de_{4}+fe_{1}) + ge_{¥0} and i = e_{¥0}e_{12345} we have 3-blade A = e_{345} + dg^{-1}e_{¥}e_{35} (so Ai= e_{¥0}e_{12} + dg^{-1}e_{¥}e_{124}) and 2-blade B = e_{¥}Ùb where b=f(ge_{1} - e_{2})^{~} + dg^{-1}e_{4}, so that AB = e_{¥}ÙbÙe_{345} and ABi remains the same.
For the general l(w+e_{¥}d+ge_{¥0}) with w^{2}=-1 we have
Ai = e_{¥0}w + g^{-1}e_{¥}(wÙd)
= (e_{¥0} + g^{-1}e_{¥}^_{w}(d))w
;
B = e_{¥}Ù(e_{0}+b) where
b = (1+g^{2})^{-1}(g+w)¯_{w}(d) + g^{-1}^_{w}(d) ;
ABi = (1 + e_{¥}¯_{w}(b))w .
Logarithm of bivector exponentiation
A logarithm b of a given bivector exponentiation
a=b^{↑} can be recovered via the
canonical decomposition
of a_{<2>}
into the sum of at most ½N commuting 2-blades.
Consider for example the logarithm of the <0;2;4>-vector exponentiation a=b^{↑} of
Â_{4,1} 2-vector b = lb_{1} + mb_{2} where b_{1} and b_{2} are commuting null or unit 2-blades so that
(b^{2})_{<4>} = bÙb = 2lmb_{1}Ùb_{2} = 2lmb_{1}b_{2} .
Then
b^{↑} = (lb_{1} + mb_{2})^{↑}
= (lb_{1})^{↑} (mb_{2})^{↑}
= (co_{b1}(l) + si_{b1}(l)b_{1})(co_{b2}(m) + si_{b2}(m)b_{2})
= co_{b1}(l) co_{b2}(m) + si_{b1}(l) co_{b2}(m) b_{1} + co_{b1}(l) si_{b2}(m) b_{2} + si_{b1}(l) si_{b2}(m) b_{1}Ùb_{2}
where
co_{b}(x) º cos(x) if b^{2}=-1 ; cosh(x) if b^{2}=1 ; 1 if b^{2}=0 ;
si_{b}(x) º sin(x) if b^{2}=-1 ; sinh(x) if b^{2}=1 ; x if b^{2}=0 ; and
ta_{b}(x) º tan(x) if b^{2}=-1 ; tanh(x) if b^{2}=1 ; x if b^{2}=0
are the usual trigonometric or hyperbolic functions according to the sign of b^{2}.
For unit b_{1} and b_{2} we have four simultaneous scalar equations for l and m:
co_{b1}(l) co_{b2}(m) = a_{<0>}
si_{b1}(l) co_{b2}(m) = b_{1}^{-2} b_{1}¿a
co_{b1}(l) si_{b2}(m) = b_{2}^{-2} b_{2}¿a
si_{b1}(l) si_{b2}(m) = (b_{1}Ùb_{2})^{-2} (b_{1}Ùb_{2})¿a
yielding for nonzero a_{<0>} solution
l = ta_{b1}^{-1}( a_{<0>}^{-1} b_{1}^{-2} b_{1}¿a ) ;
m = ta_{b2}^{-1}( a_{<0>}^{-1} b_{2}^{-2} b_{2}¿a )
with trigonometric quadrant ambiguities resolvable through the scalar and 4-blade equations.
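The recovery of l and m is easily scripted; a sketch working with the scalar components only (the sign argument s stands for the sign of b^{2}; names are illustrative):

```python
import math

def co(x, s):  return math.cos(x) if s == -1 else (math.cosh(x) if s == 1 else 1.0)
def si(x, s):  return math.sin(x) if s == -1 else (math.sinh(x) if s == 1 else x)
def ata(x, s): return math.atan(x) if s == -1 else (math.atanh(x) if s == 1 else x)

# forward: scalar components of b^exp for b = lam*b1 + mu*b2,
# with b1^2 = -1 (rotation part) and b2^2 = +1 (boost part)
lam, mu = 0.4, 0.7
a0   = co(lam, -1) * co(mu, 1)     # scalar part
a_b1 = si(lam, -1) * co(mu, 1)     # b1 coefficient
a_b2 = co(lam, -1) * si(mu, 1)     # b2 coefficient

# recovery for nonzero a0 via  lam = ta^-1( a0^-1 * coefficient )
lam_rec = ata(a_b1 / a0, -1)
mu_rec  = ata(a_b2 / a0, 1)
```

Here the ratios are tan(l) and tanh(m) respectively, so the inverse trigonometric and hyperbolic functions recover the angles (up to the quadrant ambiguities noted above).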
For N>5 we have a similar approach canonically decomposing a_{<2>} into a weighted sum of
½N commuting unit or null 2-blades b_{i}
and seeking l_{i} such that b = å_{i=1}^{½N} l_{i}b_{i}.
If a has nonzero scalar part we have ½N equations
l_{i} = ta_{bi}^{-1}(
a_{<0>}^{-1} b_{i}^{-2}b_{i}¿a )
with ambiguities resolvable through the <0;4;6;...> components of a.
Projections and Perpendiculars
Projection
We follow Dorst IPGA in defining the projection of multivector a into nonnull k-blade b by
¯_{b} a º
¯_{b}(a) º (a¿b)b^{-1}
= (a¿b)¿b^{-1}
.
The symbol ¯ used in this context should not be confused with ^{↓} used for logarithm.
[ Hestenes & Sobczyk favour
¯^{..}_{b}(a) º (a.b).b^{-1}
but this
requires specific exemptions when acting on scalars and same-grade blades
]
¯_{b}(a) = a
provides ¯_{1}(a) = 1_{*}a as an alternate left operator notation
for the scalar component a_{<0>} .
¯_{ak}(a_{k}) = (a_{k}^{2})a_{k}^{-1} = a_{k} .
¯_{b}
rejects blades "containing" b
so that, for example, ¯_{b}(i) = b_{<N>} and
¯_{b}a = a_{<0>} .
¯_{bc}(a)=¯_{c}(a) for nonzero scalar b .
Projection into a nonnull pseudoscalar has no effect ie. ¯_{i} = 1 .
We will adopt the notation a Î b as a shorthand for ¯_{b}(a) = a
(ie. a "lies within" b) but will use it only when a_{<0>}=0 .
¯_{b} is grade preserving and an "outermorphism" in that
¯_{b}(cÙd) = ¯_{b}(c) Ù ¯_{b}(d)
for pureblades c and d
[ Proof : For c,d,b of pure grades r, s, and t respectively with
r+s£t
¯_{b}(cÙd)b =
(cÙd)¿b = c.(d.b)
= c¿(¯_{b}(d)b)
= c¿(¯_{b}(d)¿b)
= (cÙ¯_{b}(d))¿b
= (-1)^{rs}(¯_{b}(d)Ùc)¿b
= (-1)^{rs}(¯_{b}(d)Ù¯_{b}(c))¿b
= (¯_{b}(c)Ù¯_{b}(d))¿b
.]
¯_{bÙc}(a) = ¯_{b}(a) + ¯_{c}(a)
for orthogonal pureblades b and c with bÙc¹0.
¯_{b1Ù...br}(a_{1}Ù...a_{k}) = ¯_{b1Ù..br}(a_{1})
Ù¯_{b1Ù..br}(a_{2})...Ù¯_{b1Ù..br}(a_{k})
= (¯_{b1}(a_{1})+...¯_{br}(a_{1}))Ù
(¯_{b1}(a_{2})+...¯_{br}(a_{2}))Ù
(¯_{b1}(a_{k})+...¯_{br}(a_{k})) .
¯_{b}(¯_{b}(c)d) = ¯_{b}(c) ¯_{b}(d) which we
will call the projected product rule. _{[ HS 1-2.13e ]}
[ Proof : Trivial for scalar c. For 1-vector c=¯_{b}(c) so that cÙb=0
we have
¯_{b}(cd) =
¯_{b}(c¿d) + ¯_{b}(cÙd)
= ((c¿d)¿b)b^{-1} + ¯_{b}(c)Ù¯_{b}(d)
= (cÙ(d¿b))b^{-1} + cÙ¯_{b}(d)
= (cÙ(¯_{b}(d)b))b^{-1} + cÙ¯_{b}(d)
= (cÙ(¯_{b}(d)¿b)b^{-1} + cÙ¯_{b}(d)
= ((c¿¯_{b}(d))¿b)b^{-1} + cÙ¯_{b}(d)
= ((c¿¯_{b}(d))b)b^{-1} + cÙ¯_{b}(d)
= c¿¯_{b}(d) + cÙ¯_{b}(d)
= c¯_{b}(d) as required.
Result follows for general c by induction on grade
.]
¯_{b}(¯_{b}(c).d) = ¯_{b}(c).¯_{b}(d) which we will call the projected dot product rule.
¯_{b}(¯_{b}(c)×d_{2}) = ¯_{b}(c)×¯_{b}(d_{2}) which we
will call the projected bivector commutation rule.
[ Proof : Let c=¯_{b}(c) .
¯_{b}(c×d_{2})
= ¯_{b}(cd_{2} - d_{2}¿c - cÙd_{2})
= c¯_{b}(d_{2}) - ¯_{b}(d_{2})¿c - cÙ¯_{b}(d_{2})
= c×¯_{b}(d_{2})
.]
¯_{b} is self adjoint (symmetric) in that
a¿¯_{b}(c) = ¯_{b}(a)¿c
= ¯_{b}(a)¿¯_{b}(c) .
Rejection
The rejection or perpendicular of a into b is defined by means of the projection as
^_{b}(a) º a - ¯_{b}(a)
which we can write as ^_{b} = 1 - ¯_{b} .
We then have the decomposition
a = ^_{b}(a) + ¯_{b}(a) where
^_{b}(a).b = 0 and
¯_{b}(a)Ùb = 0.
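For 1-vectors this decomposition is the familiar one; a minimal Python sketch over coordinate lists (illustrative only; for a 1-vector b the formula (a¿b)b^{-1} reduces to the usual scalar ratio):

```python
def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def project(a, b):
    # projection of 1-vector a into 1-vector b: (a.b) b^-1 = ((a.b)/(b.b)) b
    s = dot(a, b) / dot(b, b)
    return [s*x for x in b]

def reject(a, b):
    # rejection: a minus its projection into b
    p = project(a, b)
    return [x - y for x, y in zip(a, p)]
```

The two parts recombine to a, with the rejection orthogonal to b.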
For 1-vector a and nonnull i we have ^_{b}(a) = ¯_{(b*)}(a) , though this fails for general a.
^_{b}(c).b = 0 Þ ^_{b}(c).d=0 for any d within b, so we have ^_{b}(c).¯_{b}(d) = 0 " c,d which provides the symmetry ¯_{b}(c).d = c.¯_{b}(d) = ¯_{b}(c).¯_{b}(d) .
For 1-vector a we also have ^_{b}(a) = (aÙb)b^{-1} .
In Â_{p,q} we have a^{+}
= ¯_{e1..p}(a) - ^_{e1..p}(a)
= a - 2^_{e1..p}(a) for 1-vector (and scalar) a so we can regard
Modulatory conjugation ^{†} as 1-2^_{e1..p} acting on scalars and 1-vectors, extended as a reversing conjugation.
We have
^_{b}(a)b
= aÙb and
b^_{b}(a) = bÙa and so
^_{b}(a) = (aÙb)b^{-1}
= b^{-1}(bÙa)
for invertible b
.
[ Proof :
^_{b}(a)b
= ^_{b}(a).b + ^_{b}(a)Ùb
= ^_{b}(a)Ùb
= ^_{b}(a)Ùb + ¯_{b}(a)Ùb
= aÙb and similarly for b^_{b}(a) .
.]
Note that a_{2}×(a_{2}×b_{2}) = - ^_{a2}(b_{2}) for bivectors a_{2},b_{2}.
If k is odd then ¯_{bk}(a) commutes with b_{k}, but ^_{bk}(a) may contain both commuting and anticommuting components. If k is even then ¯_{bk}(a)_{<+>} commutes with b_{k} while ¯_{bk}(a)_{<->} anticommutes, while ^_{bk}(a) is variable.
¯_{b} and ^_{b} are idempotent (selfsquare) operators in that
¯_{b}(¯_{b}(a)) = ¯_{b}(a) " a,b
which we can express as ¯_{b}^{2} = ¯_{b} ; ^_{b}^{2} = ^_{b} .
Further ¯_{b}^_{b} = ^_{b}¯_{b} = 0 .
Projection via anticommutation
For any unit multivector b with b^{2}=1 , the operator
(½(1+b))_{=}(a) º ½(1+b)a½(1+b)
annihilates all blades which anticommute with b, while sending those which commute with b
to ½(1+b)a .
When k=N-1, ^_{bk}(a_{j}) (anti)commutes oppositely to ¯_{bk}(a_{j})
with b_{k}
for odd N , and identically
to ¯_{bk}(a_{j}) for even N.
Thus for odd N ,
¯_{bN-1}(a)_{<+>}
and ^_{bN-1}(a)_{<->} commute with b_{N-1}
while ¯_{bN-1}(a)_{<->}
and ^_{bN-1}(a)_{<+>} anticommute with b_{N-1} which we can express as
b_{N-1}¯_{bN-1}(a) = ¯_{bN-1}(a^{#})b_{N-1} ;
b_{N-1}^_{bN-1}(a) = -^_{bN-1}(a^{#})b_{N-1} .
Hence for odd N we have ¯_{bN-1}(a) = ½(a-b_{N-1}ab_{N-1}^{-1}) =
½(1-(b_{N-1})_{-1})(a)
If b_{2} is a positive signature unit 2-blade then b_{2}_{=}(a)=
¯_{b2}(a) when acting on 1-vector a.
If b^{2}=-1 we require (1+b)_{^}
where ^{^} is a conjugation with b^{^} = -b .
The subspace of multivectors that commute with b is closed under the geometric product
and so forms a geometric subalgebra. (1+b)_{=} projects general multivectors into this subalgebra.
Scaled Projections
Also of interest is the scaled projection
¯°_{b}(a) º |b| ¯_{b}(a) = |b| (a¿b)b^{-1}
which, unlike ¯_{b}, is dependent on the magnitude of b but remains linear in a.
Normalised Projections
Also of interest is the normalised projection
¯^{~}_{b}(a) º |a| ¯_{b}(a)^{~} = |a||b||a¿b|^{-1} (a¿b)b^{-1}
which preserves magnitude but is generally nonlinear in a.
Orthogonal Frames
Let (a_{1},a_{2},..,a_{k}) be a nondegenerate (not necessarily orthogonal) k-frame with all a_{i} nonnull.
Let a_{i} = a_{1}Ù....Ùa_{i} .
An orthogonal frame (basis) for a_{k}
can be constructed as
b_{i} = ^_{a_{i-1}}(a_{i})
= a_{i-1}^{-1}(a_{i-1}Ùa_{i})
= a_{i-1}^{-1}a_{i}
= a_{i-1}^{§}a_{i}/(a_{1}^{2}...a_{i-1}^{2})
which satisfies b_{1}b_{2}..b_{k}
= b_{1}Ùb_{2}...Ùb_{k}
= a_{k} .
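In coordinates this construction is classical Gram-Schmidt orthogonalisation; a sketch for 1-vector frames (illustrative, Euclidean dot product assumed):

```python
def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def orthogonalise(frame):
    # b_i = rejection of a_i from the span of a_1 .. a_{i-1}
    basis = []
    for a in frame:
        b = list(a)
        for e in basis:
            s = dot(b, e) / dot(e, e)
            b = [x - s*y for x, y in zip(b, e)]   # subtract projection into e
        basis.append(b)
    return basis
```

The resulting vectors are pairwise orthogonal and span the same flag of subspaces as the input frame.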
Intersections and Unions
When we come to represent geometric entities like k-planes and k-spheres with blades, the multivector meet and join
operations will provide the desired geometric intersections and unions. They are thus of fundamental interest.
Different authors vary in their precise definitions of meet and join but the differences are essentially
matters of scale and sign.
Let a_{k},b_{l} be nondegenerate (possibly null) proper blades:
Join
We will here define their join a_{k}Èb_{l} as follows:
If a_{k}Ùb_{l} ¹ 0 then a_{k} È b_{l} º (a_{k}Ùb_{l})^{~}
[
Note the (unit square) normalisation.
Dorst reverses the order with a_{k} È b_{l} º (b_{l}Ùa_{k}) when nonvanishing.
]
otherwise $ a nonzero vector c : a_{k}Ùc = cÙb_{l} = 0.
Define a_{k}Èb_{l} = a_{k}'Èb_{l} where a_{k} = a_{k}'Ùc = a_{k}'c. Define b_{l}' by
b_{l}=cÙb_{l}' .
We are essentially expressing a_{k}Èb_{l} as a_{k}'ÙCÙb_{l}'
where a_{k}' has grade k'<k, b_{l}' has grade l'<l, and C is a "common" blade of grade
k+l-(k'+l') . We "factor out" the common C via successive 1-vector extractions and are left with
a_{k}Èb_{l}
= a_{k}'ÙCÙb_{l}'
= a_{k}'Cb_{l}'
= a_{k}'Ùb_{l}
= a_{k}'b_{l}
= a_{k}Ùb_{l}' = a_{k}b_{l}' .
a_{k}Èb_{l} is thus a unit pseudoscalar spanning the minimal-grade ("smallest") subspace spanning
both a_{k} and b_{l} ; though there is an arbitrariness of sign
when a_{k}Ùb_{l} = 0 . The join can be informally viewed as a "geometric OR".
This procedural definition suggests an implementation in which "common" 1-vectors are successively "sought then factored out"
but such should be regarded as a last resort as we can often do better.
The condition a_{k}Ùc = cÙb_{l} = 0 is independent of the signatures of a given
orthonormal basis so we can compute the join as though in a Euclidean space provided we
finally normalise with the true signatures.
By defining (nonnull) joins to be unit (square to ±1) (unorthodox in the literature)
we resolve the scale ambiguity but retain an ambiguity of sign .
From a programming perspective, it is sensible to sign the join consistently so that, for example,
a_{k}Èb_{l} = b_{l}Èa_{k} even when a_{k}Ùb_{l} ¹ 0.
This can be done by negating the computed join when necessary to
ensure that the maximal-modulus coordinate
(a_{k}Èb_{l})_{[¥]}
with regard to a given extended-basis is positive.
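In coordinates, one such sign convention is simply (a hypothetical helper over a blade's coordinate array):

```python
def canonical_sign(coords):
    # negate the blade's coordinate array if its maximal-modulus entry
    # is negative, giving a consistently signed join
    m = max(coords, key=abs)
    return [-c for c in coords] if m < 0 else list(coords)
```

Both signs of a blade then map to the same canonical representative.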
When a_{k}Ùb_{l} is nonzero but null, we define a_{k} È b_{l} º a_{k}Ùb_{l} without normalisation and
suffer an ambiguity of scale.
Meet
The meet of a_{k} and b_{l} spans the largest common subspace of a_{k} and b_{l} and corresponds to the
blade C formed by the outer (or equivalently geometric) product of the c's "factored out" when forming the join.
It can be viewed as a "geometric AND" and is independent of the basis signatures apart from scale.
For nonnull join, we can define the meet by
a_{k}Çb_{l} º (a_{k}(a_{k}Èb_{l})^{-1}).b_{l}
= (a_{k}¿(a_{k}Èb_{l})^{-1})¿b_{l}
.
In the particular case a_{k}Èb_{l} = i we have
a_{k} Ç b_{l} = (a_{k}^{*})¿b_{l}
[ k+l³N allowing ¿ instead of . ]
.
When a_{k}Ùb_{l} ¹ 0 (ie. when they have no common subspace) we have
(a_{k}Çb_{l})_{<0>} = ±|a_{k}Ùb_{l}| .
This is a wonderfully useful result: rather than vanishing, the meet of distinct spaces
is a scalar measure of the minimal separation between them, the magnitudes
of the meet and join becoming particularly relevant. The scalar meet is an example
of the square of the meet providing a real scalar measure of the "minimal separation" of two point sets.
[ Proof :
a_{k}Çb_{l} = (a_{k}(a_{k}Ùb_{l})^{~}^{-1})¿b_{l}
= ±(a_{k}(b_{l}Ùa_{k})^{~}^{-1})¿b_{l}
= ±|b_{l}Ùa_{k}| (a_{k}(b_{l}Ùa_{k})^{-1})¿b_{l}
= ±|b_{l}Ùa_{k}| (a_{k}(a_{k}^{-1}Ùb_{l}^{-1}))¿b_{l}
.]
We can alternatively define the meet of a_{k} and b_{l} with regard to a given blade j containing
a_{k}Èb_{l} as
a_{k}Çb_{l} º
(a_{k}j^{-1} ).b_{l}
= a_{k}^{*}.b_{l}
= a_{k}^{*}.(b_{l}^{*}j)
= (a_{k}^{*}Ùb_{l}^{*}).j
whence the DeMorgan rule
(a_{k}Ç_{i}b_{l})^{*} = (a_{k}^{*}) Ù (b_{l}^{*})
where ^{*} denotes dualing in j, a^{*} º aj^{-1} .
If j is either "broader" or "narrower" than the join a_{k}Èb_{l} then
a_{k}^{*} and b_{l}^{*} contain a common subblade and
a_{k}Ç_{j}b_{l} vanishes.
Ç_{j} is thus a bilinear anti-symmetric multivector product "j-dual" to Ù and
some authors consequently favour a V-like inverted-Ù product symbol for it
[ typographically unavailable here ] .
A rapid first attempt at computing the meet of two blades a_{k} and b_{l} is provided by
( (a_{k}j^{-1}) Ù (b_{l}j^{-1}) )j where j is a guess of the join.
If this is nonzero then it is
a_{k}Çb_{l} and we can deduce that our
j=a_{k}Èb_{l}
guess was correct . Obvious guesses for j are i when k+l³N
and (a_{k}Ùb_{l})^{~} when k+l<N , the latter case providing
the join when nonvanishing.
The meet and join are thus independent of the scales of nonzero a_{k} and b_{l} and (with our particular definitions
of È and Ç) we have
a_{k}Èb_{l}
= (b_{l}(a_{k}Çb_{l})^{-1})Ùa_{k}
= ±(a_{k}(b_{l}Ça_{k})^{-1})Ùb_{l} .
(a_{k}(a_{k}Èb_{l})^{-1})¿(b_{l}(a_{k}Çb_{l})^{-1}) = (b_{l}(a_{k}Çb_{l})^{-1})¿(a_{k}(a_{k}Èb_{l})^{-1}) = 1.
Similar but differing equations arise for variant definitions.
[ Mann et al favour
a_{k}Çb_{l} º (b_{l}(a_{k}Èb_{l})^{-1})¿a_{k}
]
(a_{k}^{*})Ç(b_{l}^{*}) = (a_{k}Db_{l})^{*} for duality in any blade i spanning a space containing a_{k} and b_{l} and in particular (a^{*})Ç(b^{*}) = (aÙb)^{*} for distinct 1-vectors a¹b.
See Multivector Programming
for the author's algorithm for simultaneously computing the meet and join of two blades.
Union
The union of K multivectors a,b,...,f is the unordered set
a È b È ... È f = {a,b,...,f} . Thus, for example,
we have vector summation e_{1} + e_{2} + (e_{2}+e_{3}) = e_{1}+2e_{2}+e_{3} ;
3-blade join
e_{1} È e_{2} È (e_{2}+e_{3}) = e_{1}Ùe_{2}Ùe_{3} ; and
nonorthogonal unordered 3-frame union e_{1} È e_{2} È (e_{2}+e_{3}) =
{e_{1},e_{2},e_{2}+e_{3}}. For finite K, we can in principle specify a union
with at most K2^{N} coordinates as a 1-D array of multivectors. However,
we are more interested in infinite unions such as pointsets of particular curves or surfaces.
Disjoint
The disjoint of a_{k} and b_{l} is
a_{k}'Ùb_{l}' where a_{k}Èb_{l}
= a_{k}' Ù (a_{k}Çb_{l}) Ù b_{l}'
= a_{k}' (a_{k}Çb_{l}) b_{l}' .
Bouma et al demonstrate that for nonnull meet, the disjoint is spanned by the
delta product
a_{k}Db_{l} º (a_{k}b_{l})_{<Max>}
which acts like a "geometric XOR" . It is useful when computing the meet and join since it is directly evaluable
(although care must be taken deciding whether very small high-grade coordinates are genuine nonzero values or
errors arising from finite precision computational "noise")
and corresponds
(bar sign) to the dual of the meet in the join with a_{k}Db_{l} = (a_{k}Çb_{l})(a_{k}Èb_{l}) .
To see that the delta product is blade-valued note that a_{k}b_{l}
= (a_{k}'C)(Cb_{l}')
= C^{2}(a_{k}'b_{l}') and a_{k}'Ùb_{l}' ¹ 0 so
a_{k}Db_{l} = C^{2} a_{k}'Ùb_{l}'.
If the meet is null, ie. if blades a_{k} and b_{l} have a common null 1-vector factor, then the delta product
vanishes while the disjoint persists. By constructing our meet and join using forced Euclidean signatures
via the forced Euclidean delta product, we can deftly avoid this problem.
We will follow
Fontijne et al and define the meet via the delta product as
a_{k}Çb_{l} º
(a_{k}Db_{l})(a_{k}Èb_{l})^{-1} =
(a_{k}Db_{l})¿(a_{k}Èb_{l})^{-1} .
This provides perhaps the most efficiently computable definition of the meet when the join is known
and is equivalent to the (a_{k}(a_{k}Èb_{l})^{-1})¿b_{l} definition.
If we only require the scalar part of the meet, it may be efficiently computed as
(a_{k}Db_{l})_{*}(a_{k}Èb_{l})^{-1} .
For unit join, the square of the meet is directly evaluable as (a_{k}Çb_{l})^{2} = ± (a_{k}Db_{l})^{2}
where the sign arises from commuting (a_{k}Db_{l}) across the pseudoscalar join and the signature of the join.
Let d = <a_{k}Db_{l}>_{Grd} = <a_{k}'>_{Grd}+<b_{l}'>_{Grd}
= k+l - 2<a_{k}Çb_{l}>_{Grd} be the grade of the disjoint.
We have
m = <a_{k}Çb_{l}>_{Grd} = ½(k+l-d) ; and
j = <a_{k}Èb_{l}>_{Grd} = <a_{k}Db_{l}>_{Grd} + <a_{k}Çb_{l}>_{Grd}
= ½(k+l+d) , so the disjoint provides rapid computation of the
grades of the meet and join.
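This grade bookkeeping is a one-liner (illustrative helper; k,l are the operand grades, d the grade of the delta product):

```python
def meet_join_grades(k, l, d):
    # grades of meet and join from the grade d of the delta product a_k D b_l
    assert (k + l - d) % 2 == 0 and d <= k + l
    m = (k + l - d) // 2      # grade of the meet
    j = (k + l + d) // 2      # grade of the join
    return m, j
```

For instance, the a_{2}, b_{1} example below has d=3, giving a scalar (grade-0) meet and a grade-3 join.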
The scalar meet occurs when d=k+l , ie. when
a_{k}Db_{l} = a_{k}Ùb_{l} ¹ 0.
If k=l, then d is even and a_{k}Db_{l} commutes with 1-vectors either inside the meet or outside the join, while anticommuting with 1-vectors in the disjoint. Hence a_{k}Db_{l} (anti)commutes with a_{k} and b_{l} according as ½d is odd or even.
Bouma et al show that for 1-vector operands
¯_{akÇbl} =
½( ¯_{ak} - ¯_{akDbl} + ¯_{(akDbl)bl-1} )
¯_{akÈbl} =
½( ¯_{ak} + ¯_{akDbl} + ¯_{(akDbl)bl-1} )
Note that k-blade (a_{k}Db_{l})b_{l}^{-1} is not in general proportionate to a_{k}.
Consider for example a_{2}=(e_{1}-e_{3})e_{2} ; b_{1}=e_{1} ;
a_{2}Db_{1} = a_{2}Èb_{1} = e_{123} ; a_{2}Çb_{1} = 1 ;
(a_{2}Db_{1})b_{1}^{-1} = e_{23}.
We also have (for 1-vector operands)
¯_{akÇbl} = ¯_{ak} ^_{akDbl}
= ¯_{bl} ^_{akDbl}
and
^_{akÈbl}
= ^_{ak} ^_{akDbl}
= ^_{bl} ^_{akDbl} .
Plunge
Dorst et al refer to
(cj^{-1})Ù(bj^{-1})Ù(aj^{-1}) where j=aÈbÈc
as the plunge of blades a,b and c (note order reversal).
The plunge of a set of blades is thus the outer product of their duals in their join and provides
the highest-grade blade x perpendicular to each of them (ie. satisfying x¿a = x¿b = x¿c = 0).
An alternative equivalent (up to scale) definition of the plunge is
(aÇbÇc)(aÈbÈc)^{-1} ; ie. the dual of the meet in the join.
Null Blades
Defining the join to be unit fails when the join is null, as can arise in a nonEuclidean space; so does defining the meet via the inverted join.
The delta product vanishes if the meet is null which is also problematic.
However there is an elegant solution.
Because the spaces spanned by the meet and join are independent of basis signatures, we can calculate
them in Â_{p+q+r} rather than Â_{p,q,r} by forcing
Euclidean basis signatures when calculating a_{k}Db_{l} and constructing a_{k}Èb_{l},
so avoiding difficulties arising from projecting into null blades.
A case can then be made for "normalising" the join
in Â_{p,q,r} as though in Â_{p+q+r} (ie. modulatory normalisation)
even when nonnull.
This is preferable in practice to using the extended inverse which is problematic and discontinuous for nearnull
joins.
In the case when a_{k}Db_{l} = a_{k}Ùb_{l}¹0 , using the forced Euclidean inverse
of the join yields a scalar meet having magnitude the forced Euclidean separation.
Multivectors expressed as summed commuters
Since multivector inverses are both left and right inverses, any multivector a can be split into commuting parts
a = a_{+} + a_{-} where a_{±} º ½(a ± ka^{-1}) for any k commuting with a, chosen
perhaps to ensure a desirable property of the a_{±}.
Consider a_{±}a_{±}^{§} = ¼(aa^{§} ± 2k + k^{2}a^{-2}) . For even a_{±} this has grade
<0;4;8;12;...> so for N£7 we should seek k giving scalar a_{±}a_{±}^{§} .
If aa^{§} is nonsingular, we have a^{-1} = a^{§}(aa^{§})^{-1} and the easier computation
a_{±} º ½(a ± ka^{§} (aa^{§})^{-1} )
with
a_{±}^{2} = ¼(a^{2} ± 2k + k^{2}(a^{§})^{2} (aa^{§})^{-2})
having 4-vector component
¼a_{0}a_{4}(1 + k^{2} (aa^{§})^{-2}) .
Hence if we set k^{2} = -(aa^{§})^{2} we attain scalar a_{±}^{2} .
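The split itself uses only associativity and a two-sided inverse, so it can be sanity-checked in any associative algebra; a sketch (our own, using numpy 2×2 matrices as stand-in "multivectors" and a scalar k, which commutes with everything):

```python
import numpy as np

def commuting_split(a: np.ndarray, k: float):
    # a_+/- = (a +/- k a^{-1}) / 2 for any k commuting with a.
    a_inv = np.linalg.inv(a)
    return 0.5 * (a + k * a_inv), 0.5 * (a - k * a_inv)

a = np.array([[2.0, 1.0],
              [0.0, 3.0]])
p, m = commuting_split(a, 5.0)
print(np.allclose(p + m, a))       # True : the parts sum to a
print(np.allclose(p @ m, m @ p))   # True : and commute, both being
                                   # polynomials in a and a^{-1}
```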
Bivectors expressed as sum of commuting 2-blades
Any Euclidean N-D 2-vector b can be represented as the scalar-weighted sum of at most ½N
orthogonal (by which we mean commuting under the geometric product and, further, having vanishing contractive product) unit 2-blades.
Such a decomposition is unique except in cases where two or more distinct blades have the same magnitude.
Thus 2D and 3D rotations occur in a single 2-blade (in 3D, the dual of this 2-blade is the fixed axis 1-vector); 4D and 5D rotations have two planes; 6D and 7D rotations have three 2-blade "axes".
"Decomposing" a given bivector b in this way is nontrivial.
Hestenes & Sobczyk (3-4) provide a computational algorithm
analogous to diagonalising a skew-symmetric matrix, which we outline here.
Given a 2-vector b we seek m 2-blades a_{1},a_{2},..,a_{m} for m £ ½N with
a_{i}a_{j}=a_{i}Ùa_{j}=a_{j}a_{i} so that a_{i}¿a_{j}=0 and b=a_{1}+a_{2}+..+a_{m} .
Since (b^{k})_{<2k>} = bÙ...Ùb
= k!S_{r<s<..<v}a_{r}Ùa_{s}..Ùa_{v}
= k!S_{r<s<..<v}a_{r}a_{s}..a_{v}
where there are k terms in each product and k suffices r,s,..v ,
we have
(b^{k-1})_{<2k-2>} ¿
(b^{k})_{<2k>} =
k!(k-1)! å_{i=1}^{m} a_{i} (
S_{r<s<..<u ¹ i}
a_{r}^{2}a_{s}^{2}..a_{u}^{2} )
for 1£k<m ,
providing m linear ^{N}C_{2}-dimensional equations, solvable for a_{i}
(provided the scalar a_{i}^{2} are known and distinct) by conventional numerical methods.
For m=2, we compute the 2- and 4-vectors C_{1} = b ; C_{2} = ½(b^{2})_{<4>} = a_{1}Ùa_{2} as b is known,
and then seek to solve C_{1} = a_{1}+a_{2} ; C_{2} = a_{1}Ùa_{2} for a_{1},a_{2} by considering the characteristic scalar polynomial
(a_{1}^{2}-l)(a_{2}^{2}-l) = a_{1}^{2}a_{2}^{2} - l(a_{1}^{2}+a_{2}^{2}) + l^{2}
= C_{2}_{*}C_{2} - lC_{1}_{*}C_{1} + l^{2} to find l_{1} = a_{1}^{2} , l_{2} = a_{2}^{2} .
We then have two simultaneous bivector equations
C_{1} = a_{1}+a_{2} ; C_{1}¿C_{2} = (a_{1}+a_{2})¿(a_{1}Ùa_{2}) = l_{2}a_{1} + l_{1}a_{2}
with l_{1},l_{2},C_{1},C_{2} known, yielding, provided l_{1}¹l_{2},
a_{1} = (l_{2}-l_{1})^{-1}(C_{1}¿C_{2}-l_{1}C_{1}) and hence a_{2} = b-a_{1} .
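For concreteness, the m=2 procedure can be run numerically. The sketch below (our own toy bitmap-coded Euclidean product in Â_{4}; none of these names are from the text) splits b = e_{12} + 2e_{34} + ½e_{13} into two commuting orthogonal 2-blades:

```python
import math

def reorder_sign(a: int, b: int) -> int:
    # Transposition-count sign for the product of basis blades a, b (Euclidean).
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(x: dict, y: dict) -> dict:
    # Geometric product of multivectors {basis-blade bitmap: coefficient}.
    out = {}
    for ab, ac in x.items():
        for bb, bc in y.items():
            out[ab ^ bb] = out.get(ab ^ bb, 0.0) + reorder_sign(ab, bb) * ac * bc
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

def grade_part(x: dict, g: int) -> dict:
    return {k: v for k, v in x.items() if bin(k).count("1") == g}

def scal(x: dict, y: dict) -> float:
    return gp(x, y).get(0, 0.0)            # scalar product X * Y

def axpy(x: dict, y: dict, s: float) -> dict:
    # x + s*y
    out = dict(x)
    for k, v in y.items():
        out[k] = out.get(k, 0.0) + s * v
    return out

# b = e12 + 2 e34 + 0.5 e13  (bit i set <=> e_{i+1} present)
b = {0b0011: 1.0, 0b1100: 2.0, 0b0101: 0.5}
C1 = b
C2 = {k: 0.5 * v for k, v in grade_part(gp(b, b), 4).items()}

# l1 = a1^2 and l2 = a2^2 are the roots of l^2 - (C1*C1) l + (C2*C2) = 0
T, D = scal(C1, C1), scal(C2, C2)
root = math.sqrt(T * T - 4.0 * D)
l1, l2 = 0.5 * (T - root), 0.5 * (T + root)

# a1 = (l2 - l1)^{-1} (C1 contracted C2 - l1 C1) ; a2 = b - a1
C1cC2 = grade_part(gp(C1, C2), 2)          # contraction: the grade 4-2 part
a1 = {k: v / (l2 - l1) for k, v in axpy(C1cC2, C1, -l1).items()}
a2 = axpy(b, a1, -1.0)
```

One can then verify numerically that a_{1}+a_{2}=b, that a_{1} and a_{2} commute, that their scalar product vanishes, and that a_{i}^{2}=l_{i}.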
For m>2 we compute 2k-vector C_{k} º k!^{-1}(b^{k})_{<2k>} as the known sum
a_{1}Ùa_{2}Ù...Ùa_{k} + ....
of ordered outer products of k of the m a_{i}. The scalar a_{i}^{2} are the m roots of the m^{th} order scalar polynomial
å_{k=0}^{m} (-l)^{m-k}(C_{k}_{*}C_{k})=0 and we have m linear bivector equations
C_{k-1}¿C_{k} = å_{i=1}^{m} a_{i} ( å_{j_{1}<j_{2}<..<j_{k-1} excluding i} l_{j_{1}}l_{j_{2}}..l_{j_{k-1}} ) .
Conclusion
We have described multivectors as elements of a geometric algebra, expressible as weighted blade summations.
From a programmer's perspective, they are generalised vectors.
While the above mathematics may appear intimidating, and we are just getting started, we hope to show here that multivectors
are a tremendously expressive and liberating tool, far easier to use than to do without in geometric programming.
Still today, tragically few recognise this.
Next : Multivectors as Geometric Objects