In mathematics, the ring of polynomial functions on a vector space V over a field k gives a coordinate-free analog of a polynomial ring. It is denoted by k[V]. If V is finite-dimensional and is viewed as an algebraic variety, then k[V] is precisely the coordinate ring of V.
The explicit definition of the ring can be given as follows. Given a polynomial ring $k[t_1, \dots, t_n]$, we can view $t_i$ as a coordinate function on $k^n$; i.e., $t_i(x) = x_i$ where $x = (x_1, \dots, x_n)$. This suggests the following: given a vector space V, let k[V] be the commutative k-algebra generated by the dual space $V^*$; it is a subring of the ring of all functions $V \to k$. If we fix a basis for V and write $t_i$ for its dual basis, then k[V] consists of polynomials in the $t_i$.
If k is infinite, then k[V] is the symmetric algebra of the dual space $V^*$.
In applications, one also defines k[V] when V is defined over some subfield of k (e.g., k is the complex field and V is a real vector space). The same definition still applies.
Throughout the article, for simplicity, the base field k is assumed to be infinite.
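To make the definition concrete, here is a minimal Python sketch (our own illustration; the names t1, t2 and f are chosen for the example): on $V = \mathbb{R}^2$ with its standard basis, the dual-basis coordinate functions generate k[V], and an element of k[V] such as $t_1^2 + 3 t_1 t_2$ is literally a function $V \to k$.

```python
# Sketch: coordinate functions on V = R^2 and a polynomial function built from them.

def t1(x):
    # first coordinate function of the dual basis: t1(x) = x_1
    return x[0]

def t2(x):
    # second coordinate function of the dual basis: t2(x) = x_2
    return x[1]

def f(x):
    # the element t1^2 + 3*t1*t2 of k[V], viewed as a function V -> k
    return t1(x)**2 + 3 * t1(x) * t2(x)

print(f((2.0, 1.0)))  # 2^2 + 3*2*1 = 10.0
```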
Let $A = K[x]$ be the set of all polynomials over a field K and let B be the set of all polynomial functions in one variable over K. Both A and B are algebras over K, with the usual multiplication and addition of polynomials and of functions. We can map each $f$ in A to $\hat{f}$ in B by the rule $\hat{f}(t) = f(t)$. A routine check shows that the mapping $f \mapsto \hat{f}$ is a homomorphism of the algebras A and B.

This homomorphism is an isomorphism if and only if K is an infinite field. If K is a finite field, let $p(x) = \prod_{t \in K}(x - t)$. Then p is a nonzero polynomial in K[x], yet $p(t) = 0$ for every t in K, so $\hat{p} = 0$ is the zero function and the homomorphism is not injective (in fact the two algebras are not isomorphic, since the algebra of polynomials is infinite while that of polynomial functions on a finite field is finite).

If K is infinite, then choose a polynomial f such that $\hat{f} = 0$; we want to show that this implies $f = 0$. If $\deg f = n$, let $t_0, t_1, \dots, t_n$ be n + 1 distinct elements of K. Then $f(t_i) = 0$ for $0 \le i \le n$, and by Lagrange interpolation $f = 0$. Hence the mapping $f \mapsto \hat{f}$ is injective; since it is clearly surjective, it is an algebra isomorphism of A and B.
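The failure over a finite field is easy to check by direct computation. The following minimal Python sketch (our own illustration, not part of the article) verifies that over $K = \mathbb{F}_5$ the nonzero polynomial $p(x) = \prod_{t \in K}(x - t) = x^5 - x$ induces the zero function, so the map $f \mapsto \hat{f}$ is not injective.

```python
# Sketch: over the finite field F_5, the nonzero polynomial
# p(x) = prod_{t in F_5} (x - t) = x^5 - x vanishes at every element,
# so distinct polynomials can induce the same polynomial function.

q = 5  # work modulo 5, i.e., in F_5 = {0, 1, 2, 3, 4}

def p(x):
    # evaluate prod_{t in F_5} (x - t) modulo 5
    result = 1
    for t in range(q):
        result = (result * (x - t)) % q
    return result

print([p(x) for x in range(q)])  # [0, 0, 0, 0, 0] -- p induces the zero function
```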
Let k be an infinite field of characteristic zero (or at least very large) and V a finite-dimensional vector space.
Let $S^q(V)$ denote the vector space of multilinear functionals $\lambda\colon \textstyle\prod_1^q V \to k$ that are symmetric, i.e., $\lambda(v_1, \dots, v_q)$ is the same for all permutations of the $v_i$.

Any λ in $S^q(V)$ gives rise to a homogeneous polynomial function f of degree q: we simply let $f(v) = \lambda(v, \dots, v)$.
To see that f is a polynomial function, choose a basis $e_i$, $1 \le i \le n$, of V and let $t_i$ be its dual basis. Then

$$\lambda(v_1, \dots, v_q) = \sum_{i_1, \dots, i_q = 1}^{n} \lambda(e_{i_1}, \dots, e_{i_q})\, t_{i_1}(v_1) \cdots t_{i_q}(v_q),$$

which shows that f is a polynomial in the $t_i$.
Thus, there is a well-defined linear map:
$$\phi\colon S^q(V) \to k[V]_q, \qquad \phi(\lambda)(v) = \lambda(v, \dots, v),$$

where $k[V]_q$ denotes the space of homogeneous polynomial functions of degree q.
We show that φ is an isomorphism. Choosing a basis as before, any homogeneous polynomial function f of degree q can be written as

$$f = \sum_{i_1, \dots, i_q = 1}^{n} a_{i_1 \cdots i_q}\, t_{i_1} \cdots t_{i_q},$$

where the coefficients $a_{i_1 \cdots i_q}$ are symmetric in $i_1, \dots, i_q$. Set

$$\psi(f)(v_1, \dots, v_q) = \sum_{i_1, \dots, i_q = 1}^{n} a_{i_1 \cdots i_q}\, t_{i_1}(v_1) \cdots t_{i_q}(v_q).$$

Then $\phi \circ \psi$ is the identity; in particular, φ is surjective. To see that φ is injective, suppose $\phi(\lambda) = 0$. For scalars $t_1, \dots, t_q$ in k, expanding

$$\phi(\lambda)(t_1 v_1 + \cdots + t_q v_q) = \lambda(t_1 v_1 + \cdots + t_q v_q, \dots, t_1 v_1 + \cdots + t_q v_q)$$

by multilinearity, the coefficient of $t_1 t_2 \cdots t_q$ is $q!\, \lambda(v_1, \dots, v_q)$. Since the left-hand side vanishes identically and k is infinite of characteristic zero, it follows that $\lambda = 0$.
Note: φ is independent of the choice of basis, so the above proof shows that ψ is also independent of a basis, a fact that is not a priori obvious.
Example: A symmetric bilinear functional gives rise to a quadratic form in a unique way, and any quadratic form arises in this way.
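For q = 2 this is the familiar correspondence between symmetric bilinear forms and quadratic forms. The following numerical sketch (our own illustration, with an arbitrarily chosen 2×2 symmetric matrix over the reals) recovers λ from the quadratic form φ(λ) by polarization, i.e., it implements ψ for q = 2.

```python
import numpy as np

# lam(u, v) = u^T A v with A symmetric, so lam lies in S^2(V) for V = R^2;
# f = phi(lam) is the quadratic form f(v) = v^T A v, and psi recovers lam from f.

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric matrix chosen for the example

def lam(u, v):                       # symmetric bilinear functional
    return u @ A @ v

def f(v):                            # the quadratic form phi(lam)
    return lam(v, v)

def psi_f(u, v):                     # polarization: (f(u+v) - f(u) - f(v)) / 2
    return 0.5 * (f(u + v) - f(u) - f(v))

u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(lam(u, v), psi_f(u, v))        # both print 5.0
```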
See main article: Taylor series. Given a smooth function, locally, one can get a partial derivative of the function from its Taylor series expansion and, conversely, one can recover the function from the series expansion. This fact continues to hold for polynomial functions on a vector space. If f is in k[V], then, for x and y in V, we write
$$f(x + y) = \sum_{n=0}^{\infty} g_n(x, y),$$

where each $g_n(x, y)$ is homogeneous of degree n in y and only finitely many of them are nonzero. We then let

$$(P_y f)(x) = g_1(x, y),$$

resulting in a linear endomorphism $P_y$ of k[V], called the polarization operator. We then have, as promised:

Theorem: For each f in k[V] and x, y in V,

$$g_n(x, y) = \frac{1}{n!}\, P_y^n f(x),$$

so that f is recovered from its polarizations: $f(x + y) = \sum_{n=0}^{\infty} \frac{1}{n!} P_y^n f(x)$.

Proof: Since $f(x + ty) = \sum_n t^n g_n(x, y)$ and $g_0(x, y) = f(x)$, we have

$$P_y f(x) = \left.\frac{d}{dt}\right|_{t=0} f(x + ty) = \left.\frac{f(x + ty) - f(x)}{t}\right|_{t=0},$$

where the last expression is the polynomial $\frac{f(x + ty) - f(x)}{t}$ in t evaluated at t = 0. Iterating this formula gives the general case; for n = 2, for instance,

$$P_y^2 f(x) = \left.\frac{\partial}{\partial t_1}\right|_{t_1=0} P_y f(x + t_1 y) = \left.\frac{\partial}{\partial t_1}\right|_{t_1=0} \left.\frac{\partial}{\partial t_2}\right|_{t_2=0} f(x + (t_1 + t_2) y) = 2!\, g_2(x, y). \qquad \square$$
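The theorem is easy to check symbolically for a particular f. The sketch below (our own illustration using sympy, with an arbitrarily chosen polynomial function on $V = \mathbb{R}^2$) extracts $g_1(x, y)$ from the expansion of f(x + y) and compares it with $\left.\frac{d}{dt}\right|_{t=0} f(x + ty)$.

```python
import sympy as sp

# f = t1^2*t2 + 3*t2 in the dual coordinates t1, t2 of V = R^2 (example choice).
x1, x2, y1, y2, t = sp.symbols('x1 x2 y1 y2 t')

def f(v1, v2):
    return v1**2 * v2 + 3 * v2

# g1(x, y): the part of f(x + y) that is homogeneous of degree 1 in (y1, y2).
expansion = sp.expand(f(x1 + y1, x2 + y2))
g1 = sum(term for term in expansion.as_ordered_terms()
         if sp.Poly(term, y1, y2).total_degree() == 1)

# Polarization via the derivative formula: d/dt f(x + t*y) at t = 0.
Pyf = sp.diff(f(x1 + t * y1, x2 + t * y2), t).subs(t, 0)

print(sp.simplify(g1 - sp.expand(Pyf)))  # 0: the two expressions agree
```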
When the polynomials are valued not in a field k but in some algebra, one may define additional structure. Thus, for example, one may consider the ring of functions valued in GL(n,m) instead of in k = GL(1,m). In this case, one may impose an additional axiom.
The operator product algebra is an associative algebra of the form

$$A^i(x)\, B^j(y) = \sum_k f^{ij}_k(x, y, z)\, C^k(z).$$
The structure constants $f^{ij}_k(x, y, z)$ are required to be single-valued functions rather than sections of some vector bundle. The fields (or operators) $A^i(x)$ are required to span the ring of functions. In practical calculations, it is usually required that the sums be analytic within some radius of convergence, typically $|x - y|$. The ring of functions can then be taken to be the ring of polynomial functions.
The above can be considered to be an additional requirement imposed on the ring; it is sometimes called the bootstrap. In physics, a special case of the operator product algebra is known as the operator product expansion.