In mathematics, a series is, roughly speaking, the operation of adding infinitely many quantities, one after the other, to a given starting quantity.[1] The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics) through generating functions. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics and finance.
For a long time, the idea that such a potentially infinite summation could produce a finite result was considered paradoxical. This paradox was resolved using the concept of a limit during the 17th century. Zeno's paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno concluded that Achilles could never reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch up with the tortoise.
In modern terminology, any (ordered) infinite sequence

(a_1, a_2, a_3, \ldots)

of terms (that is, numbers, functions, or anything else that can be added) defines a series, which is the operation of adding the a_i one after the other. To emphasize that there are infinitely many terms, a series may be called an infinite series. Such a series is represented (or denoted) by an expression like

a_1 + a_2 + a_3 + \cdots,

or, using the summation sign,

\sum_{i=1}^\infty a_i.
The infinite sequence of additions implied by a series cannot be effectively carried on (at least in a finite amount of time). However, if the set to which the terms and their finite sums belong has a notion of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as n tends to infinity (if the limit exists) of the finite sums of the n first terms of the series, which are called the nth partial sums of the series. That is,
\sum_{i=1}^\infty a_i = \lim_{n\to\infty} \sum_{i=1}^n a_i.
When this limit exists, one says that the series is convergent or summable, or that the sequence (a_1, a_2, a_3, \ldots) is summable. Otherwise, the series is said to be divergent.
The notation \sum_{i=1}^\infty a_i denotes both the series—that is the implicit process of adding the terms one after the other indefinitely—and, if the series is convergent, the sum of the series—the result of the process. This is a generalization of the similar convention of denoting by a+b both the addition—the process of adding—and its result—the sum of a and b. Commonly, the terms of a series come from a ring, often the field \R of the real numbers or the field \Complex of the complex numbers.
An infinite series or simply a series is an infinite sum, represented by an infinite expression of the form
a_0 + a_1 + a_2 + \cdots,
where (a_n) is any ordered sequence of terms, such as numbers, functions, or anything else that can be added (an abelian group). This is an expression obtained from the list of terms a_0, a_1, \ldots by laying them side by side and conjoining them with the symbol "+". A series may also be represented using summation notation, such as

\sum_{n=0}^{\infty} a_n.
If an abelian group A of terms has a concept of limit (e.g., if it is a metric space), then some series, the convergent series, can be interpreted as having a value in A, called the sum of the series. This includes the common cases from calculus, in which the group is the field of real numbers or the field of complex numbers. Given a series s=\sum_{n=0}^\infty a_n, its kth partial sum is
s_k = \sum_{n=0}^{k} a_n = a_0 + a_1 + \cdots + a_k.
By definition, the series \sum_{n=0}^{\infty} a_n converges to the limit L (or simply sums to L) if the sequence of its partial sums has a limit L. In this case, one usually writes
L = \sum_{n=0}^{\infty} a_n.
A series is said to be convergent if it converges to some limit, or divergent when it does not. The value of this limit, if it exists, is then the value of the series.
A series is said to converge or to be convergent when the sequence (s_k) of partial sums has a finite limit. If the limit of s_k is infinite or does not exist, the series is said to diverge. When the limit of the partial sums exists, it is called the value (or sum) of the series
\sum_{n=0}^\infty a_n = \lim_{k\to\infty} s_k = \lim_{k\to\infty} \sum_{n=0}^k a_n.
An easy way that an infinite series can converge is if all the a_n are zero for n sufficiently large. Such a series can be identified with a finite sum, so it is only infinite in a trivial sense.
Working out the properties of the series that converge, even if infinitely many terms are nonzero, is the essence of the study of series. Consider the example
1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots + \frac{1}{2^n} + \cdots.
It is possible to "visualize" its convergence on the real number line: we can imagine a line of length 2, with successive segments marked off of lengths 1, 1/2, 1/4, etc. There is always room to mark the next segment, because the amount of line remaining is always the same as the last segment marked: when we have marked off 1/2, we still have a piece of length 1/2 unmarked, so we can certainly mark the next 1/4. This argument does not prove that the sum is equal to 2 (although it is), but it does prove that it is at most 2. In other words, the series has an upper bound. Given that the series converges, proving that it is equal to 2 requires only elementary algebra. If the series is denoted S, it can be seen that
S/2 = \frac{1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots}{2} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots.
Therefore,
S-S/2 = 1 \Rightarrow S = 2.
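As a quick numeric check (the number of terms shown is an illustrative choice), the partial sums of this series can be computed directly and are seen to approach 2:

    # Partial sums s_k = 1 + 1/2 + 1/4 + ... + 1/2^k approach S = 2.
    s = 0.0
    for k in range(60):
        s += 1.0 / 2 ** k
        if k in (1, 5, 10, 59):
            print(k, s)   # 1.5, 1.96875, 1.999..., then 2.0 to double precision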
The idiom can be extended to other, equivalent notions of series. For instance, a recurring decimal, as in
x = 0.111\dots,
encodes the series
\sum_{n=1}^\infty \frac{1}{10^n}.
Since these series always converge to real numbers (because of what is called the completeness property of the real numbers), to talk about the series in this way is the same as to talk about the numbers for which they stand. In particular, the decimal expansion 0.111... can be identified with 1/9. This leads to an argument that 9 \times 0.111\ldots = 0.999\ldots = 1, which only relies on the fact that the limit laws for series preserve the arithmetic operations; for more detail on this argument, see 0.999....
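Concretely, multiplying the series term by term by 9 and using the limit laws gives

9 \cdot \sum_{n=1}^\infty \frac{1}{10^n} = \sum_{n=1}^\infty \frac{9}{10^n} = 0.999\ldots = 1,

so the decimal 0.111\ldots stands for the number 1/9.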
Examples of numerical series

A geometric series is one where each successive term is produced by multiplying the previous term by a constant number (called the common ratio). In general, the geometric series
\sum_{n=0}^\infty z^n
converges if and only if |z| < 1, in which case it converges to \frac{1}{1-z}.

The harmonic series is the series[3]

1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots = \sum_{n=1}^\infty \frac{1}{n}.

The harmonic series is divergent.

An alternating series is a series where terms alternate signs. Examples:

1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n} = \ln(2) \quad (alternating harmonic series)

and

-1 + \frac{1}{3} - \frac{1}{5} + \frac{1}{7} - \frac{1}{9} + \cdots = \sum_{n=1}^\infty \frac{(-1)^n}{2n-1} = -\frac{\pi}{4}.

A telescoping series

\sum_{n=1}^\infty (b_n - b_{n+1})

converges if the sequence b_n converges to a limit L as n goes to infinity. The value of the series is then b_1 - L.

An arithmetico-geometric series is a generalization of the geometric series, which has coefficients of the common ratio equal to the terms in an arithmetic sequence. Example:

3 + \frac{5}{2} + \frac{7}{4} + \frac{9}{8} + \frac{11}{16} + \cdots = \sum_{n=0}^\infty \frac{3 + 2n}{2^n}.

The p-series

\sum_{n=1}^\infty \frac{1}{n^p}

converges for p > 1 and diverges for p ≤ 1, which can be shown with the integral criterion described below in convergence tests. As a function of p, the sum of this series is Riemann's zeta function.

Hypergeometric series:

{}_rF_s \left[\begin{matrix}a_1, a_2, \dotsc, a_r \\ b_1, b_2, \dotsc, b_s \end{matrix}; z \right] := \sum_{n=0}^{\infty} \frac{(a_1)_n (a_2)_n \cdots (a_r)_n}{(b_1)_n (b_2)_n \cdots (b_s)_n \, n!} z^n

and their generalizations (such as basic hypergeometric series and elliptic hypergeometric series) frequently appear in integrable systems and mathematical physics.[4]

There are some elementary series whose convergence is not yet known or proven. For example, it is unknown whether the Flint Hills series

\sum_{n=1}^\infty \frac{1}{n^3 \sin^2 n}

converges or not. The convergence depends on how well \pi can be approximated with rational numbers (which is not yet known). More specifically, the values of n with large numerical contributions to the sum are the numerators of the continued fraction convergents of \pi, a sequence beginning with 1, 3, 22, 333, 355, 103993, ... . These are integers n that are close to m\pi for some integer m, so that \sin n is close to \sin m\pi = 0 and its reciprocal is large.

Pi

See main article: Basel problem and Leibniz formula for π.

\sum_{i=1}^{\infty} \frac{1}{i^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6}

\sum_{i=1}^\infty \frac{(-1)^{i+1} \cdot 4}{2i-1} = \frac{4}{1} - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} + \frac{4}{9} - \frac{4}{11} + \frac{4}{13} - \cdots = \pi

Natural logarithm of 2

\sum_{i=1}^\infty \frac{(-1)^{i+1}}{i} = \ln 2

\sum_{i=1}^\infty \frac{1}{2i(2i-1)} = \ln 2

\sum_{i=1}^\infty \frac{(-1)^{i+1}}{i(i+1)} = 2\ln(2) - 1

\sum_{i=1}^\infty \frac{1}{i(4i^2-1)} = 2\ln(2) - 1

\sum_{i=1}^\infty \frac{1}{2^i i} = \ln 2

\sum_{i=1}^\infty \left(\frac{1}{3^i} + \frac{1}{4^i}\right)\frac{1}{i} = \ln 2

Natural logarithm base e

See main article: e (mathematical constant).

\sum_{i=0}^\infty \frac{(-1)^i}{i!} = 1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots = \frac{1}{e}

\sum_{i=0}^\infty \frac{1}{i!} = \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots = e

Calculus and partial summation as an operation on sequences

Partial summation takes as input a sequence, (a_n), and gives as output another sequence, (S_N). It is thus a unary operation on sequences. Further, this function is linear, and thus is a linear operator on the vector space of sequences, denoted Σ. The inverse operator is the finite difference operator, denoted Δ. These behave as discrete analogues of integration and differentiation, only for series (functions of a natural number) instead of functions of a real variable. For example, the sequence (1, 1, 1, ...) has series (1, 2, 3, 4, ...) as its partial summation, which is analogous to the fact that \int_0^x 1\,dt = x. In computer science, it is known as prefix sum.
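A minimal sketch of partial summation as the prefix-sum operation, together with the finite difference operator as its inverse; the helper function names are illustrative, not from any standard library:

    def prefix_sum(a):
        # Sigma: sequence of partial sums S_N = a_0 + ... + a_N
        out, total = [], 0
        for x in a:
            total += x
            out.append(total)
        return out

    def finite_difference(s):
        # Delta: recovers the original terms from the partial sums
        return [s[0]] + [s[n] - s[n - 1] for n in range(1, len(s))]

    a = [1, 1, 1, 1, 1]
    print(prefix_sum(a))                     # [1, 2, 3, 4, 5], as in the example above
    print(finite_difference(prefix_sum(a)))  # [1, 1, 1, 1, 1]: Delta undoes Sigma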
Properties of series

Series are classified not only by whether they converge or diverge, but also by the properties of the terms a_n (absolute or conditional convergence); type of convergence of the series (pointwise, uniform); the class of the term a_n (whether it is a real number, arithmetic progression, trigonometric function); etc.

Non-negative terms

When a_n is a non-negative real number for every n, the sequence S_N of partial sums is non-decreasing. It follows that a series \sum a_n with non-negative terms converges if and only if the sequence S_N of partial sums is bounded. For example, the series

\sum_{n=1}^\infty \frac{1}{n^2}

is convergent, because the inequality

\frac{1}{n^2} \le \frac{1}{n-1} - \frac{1}{n}, \quad n \ge 2,

and a telescopic sum argument imply that the partial sums are bounded by 2. The exact value of the original series is the Basel problem.

Grouping

Grouping the terms of a series does not reorder them, so the Riemann series theorem does not apply. The partial sums of the grouped series form a subsequence of the partial sums of the original series, so if the original series converges, the grouped series converges to the same sum. For divergent series this is not true: for example, 1 - 1 + 1 - 1 + \cdots grouped in pairs gives the series 0 + 0 + 0 + \cdots, which is convergent. On the other hand, divergence of the grouped series implies that the original series can only be divergent, which is sometimes useful, as in Oresme's proof that the harmonic series diverges.

Absolute convergence

See main article: Absolute convergence. A series

\sum_{n=1}^\infty a_n

converges absolutely if the series of absolute values

\sum_{n=1}^\infty \left|a_n\right|

converges. This is sufficient to guarantee not only that the original series converges to a limit, but also that any reordering of it converges to the same limit.

Conditional convergence

See main article: Conditional convergence. A series of real or complex numbers is said to be conditionally convergent (or semi-convergent) if it is convergent but not absolutely convergent. A famous example is the alternating series

\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots,

which is convergent (and its sum is equal to \ln 2), but the series formed by taking the absolute value of each term is the divergent harmonic series. The Riemann series theorem says that any conditionally convergent series can be reordered to make a divergent series, and moreover, if the a_n are real and S is any real number, that one can find a reordering so that the reordered series converges with sum equal to S.

Abel's test is an important tool for handling semi-convergent series. If a series has the form

\sum a_n = \sum \lambda_n b_n

where the partial sums B_n = b_0 + \cdots + b_n are bounded, \lambda_n has bounded variation, and \lim \lambda_n B_n exists:

\sup_N \left| \sum_{n=0}^N b_n \right| < \infty, \quad \sum \left|\lambda_{n+1} - \lambda_n\right| < \infty, \quad \text{and} \quad \lambda_n B_n \ \text{converges},

then the series \sum a_n is convergent. This applies to the point-wise convergence of many trigonometric series, as in

\sum_{n=2}^\infty \frac{\sin(nx)}{\ln n}

with 0 < x < 2\pi. Abel's method consists in writing b_{n+1} = B_{n+1} - B_n, and in performing a transformation similar to integration by parts (called summation by parts), that relates the given series \sum a_n to the absolutely convergent series

\sum (\lambda_n - \lambda_{n+1}) \, B_n.

Evaluation of truncation errors

The evaluation of truncation errors is an important procedure in numerical analysis (especially validated numerics and computer-assisted proof).
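The transformation behind Abel's method can be checked numerically. A minimal sketch of the summation-by-parts identity, with arbitrary illustrative choices for \lambda_n and b_n:

    # Summation by parts (Abel's transformation), a minimal numeric check.
    # With B_n = b_0 + ... + b_n, the identity is
    #   sum_{n=0}^{N} lam_n * b_n = lam_N * B_N - sum_{n=0}^{N-1} (lam_{n+1} - lam_n) * B_n.
    import math

    N = 50
    lam = [1.0 / (n + 1) for n in range(N + 1)]   # slowly varying, bounded variation
    b = [math.sin(float(n)) for n in range(N + 1)]  # terms with bounded partial sums (illustrative)

    B, total = [], 0.0
    for x in b:
        total += x
        B.append(total)

    lhs = sum(lam[n] * b[n] for n in range(N + 1))
    rhs = lam[N] * B[N] - sum((lam[n + 1] - lam[n]) * B[n] for n in range(N))
    print(lhs, rhs)   # the two values agree up to rounding error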
Alternating series

See main article: Alternating series. When the conditions of the alternating series test are satisfied by S := \sum_{m=0}^\infty (-1)^m u_m, there is an exact error evaluation.[5] Set s_n to be the partial sum s_n := \sum_{m=0}^n (-1)^m u_m of the given alternating series S. Then the following inequality holds:

|S - s_n| \leq u_{n+1}.

Taylor series

See main article: Taylor series. Taylor's theorem is a statement that includes the evaluation of the error term when the Taylor series is truncated.

Hypergeometric series

By using the ratio of consecutive terms, we can obtain the evaluation of the error term when the hypergeometric series is truncated.[6]

Matrix exponential

See main article: Matrix exponential. For the matrix exponential:

\exp(X) := \sum_{k=0}^\infty \frac{1}{k!} X^k, \quad X \in \mathbb{C}^{n \times n},

the following error evaluation holds (scaling and squaring method):[7] [8] [9]

T_{r,s}(X) := \left[\sum_{j=0}^r \frac{1}{j!}(X/s)^j\right]^s, \quad \|\exp(X) - T_{r,s}(X)\| \leq \frac{\|X\|^{r+1}}{s^r (r+1)!}\exp(\|X\|).

Convergence tests

See main article: Convergence tests. There exist many tests that can be used to determine whether particular series converge or diverge.

n-th term test: If \lim_{n\to\infty} a_n \neq 0, then the series diverges; if \lim_{n\to\infty} a_n = 0, then the test is inconclusive.

Comparison test 1 (see Direct comparison test): If \sum b_n is an absolutely convergent series such that \left\vert a_n \right\vert \leq C \left\vert b_n \right\vert for some number C and for sufficiently large n, then \sum a_n converges absolutely as well. If \sum \left\vert b_n \right\vert diverges, and \left\vert a_n \right\vert \geq \left\vert b_n \right\vert for all sufficiently large n, then \sum a_n also fails to converge absolutely (though it could still be conditionally convergent, for example, if the a_n alternate in sign).

Comparison test 2 (see Limit comparison test): If \sum b_n is an absolutely convergent series such that \left\vert \frac{a_{n+1}}{a_n} \right\vert \leq \left\vert \frac{b_{n+1}}{b_n} \right\vert for sufficiently large n, then \sum a_n converges absolutely as well. If \sum \left| b_n \right| diverges, and \left\vert \frac{a_{n+1}}{a_n} \right\vert \geq \left\vert \frac{b_{n+1}}{b_n} \right\vert for all sufficiently large n, then \sum a_n also fails to converge absolutely (though it could still be conditionally convergent, for example, if the a_n alternate in sign).

Ratio test: If there exists a constant C < 1 such that \left\vert \frac{a_{n+1}}{a_n} \right\vert < C for all sufficiently large n, then \sum a_n converges absolutely. When the ratio is less than 1, but not less than a constant less than 1, convergence is possible but this test does not establish it.

Root test: If there exists a constant C < 1 such that \left\vert a_n \right\vert^{1/n} \leq C for all sufficiently large n, then \sum a_n converges absolutely.

Integral test: If f(x) is a positive monotone decreasing function defined on the interval [1, \infty) with f(n) = a_n for all n, then \sum a_n converges if and only if the integral \int_1^\infty f(x) \, dx is finite.

Cauchy's condensation test: If a_n is non-negative and non-increasing, then the two series \sum a_n and \sum 2^k a_{2^k} are of the same nature: both convergent, or both divergent.

Alternating series test: A series of the form \sum (-1)^n a_n (with a_n > 0) is called alternating. Such a series converges if the sequence a_n is monotone decreasing and converges to 0. The converse is in general not true.

For some specific types of series there are more specialized convergence tests, for instance for Fourier series there is the Dini test.
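As a concrete instance of the alternating series test and of the error bound |S - s_n| \leq u_{n+1} given above, a minimal sketch for the alternating harmonic series, whose sum is \ln 2 (the truncation points are illustrative choices):

    import math

    def alternating_partial_sum(n):
        # s_n = sum_{m=0}^{n} (-1)^m u_m with u_m = 1/(m+1), i.e. 1 - 1/2 + 1/3 - ...
        return sum((-1) ** m / (m + 1) for m in range(n + 1))

    S = math.log(2)              # exact sum of the alternating harmonic series
    for n in (10, 100, 1000):
        s_n = alternating_partial_sum(n)
        u_next = 1 / (n + 2)     # u_{n+1}
        print(n, abs(S - s_n) <= u_next)   # True: the bound |S - s_n| <= u_{n+1} holds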
Series of functions

See main article: Function series. A series of real- or complex-valued functions

\sum_{n=1}^\infty f_n(x)

converges pointwise on a set E, if the series converges for each x in E as an ordinary series of real or complex numbers. Equivalently, the partial sums

s_N(x) = \sum_{n=1}^N f_n(x)

converge to f(x) as N → ∞ for each x ∈ E.

A stronger notion of convergence of a series of functions is uniform convergence. A series converges uniformly if it converges pointwise to the function f(x), and the error in approximating the limit by the Nth partial sum,

|s_N(x) - f(x)|,

can be made arbitrarily small, independently of x, by choosing a sufficiently large N.

Uniform convergence is desirable for a series because many properties of the terms of the series are then retained by the limit. For example, if a series of continuous functions converges uniformly, then the limit function is also continuous. Similarly, if the f_n are integrable on a closed and bounded interval I and converge uniformly, then the series is also integrable on I and can be integrated term-by-term. Tests for uniform convergence include the Weierstrass M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion.

More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a certain set of measure zero. Other modes of convergence depend on a different metric space structure on the space of functions under consideration. For instance, a series of functions converges in mean on a set E to a limit function f provided

\int_E \left|s_N(x)-f(x)\right|^2\,dx \to 0

as N → ∞.

Power series

See main article: Power series.

A power series is a series of the form

\sum_{n=0}^\infty a_n(x-c)^n.

The Taylor series at a point c of a function is a power series that, in many cases, converges to the function in a neighborhood of c. For example, the series

\sum_{n=0}^{\infty} \frac{x^n}{n!}

is the Taylor series of e^x at the origin and converges to it for every x.

Unless it converges only at x = c, such a series converges on a certain open disc of convergence centered at the point c in the complex plane, and may also converge at some of the points of the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients a_n. The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence: to wit, it is uniformly convergent on compact sets.

Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the nineteenth century, rigorous proofs of the convergence of series were always required.
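A small numerical illustration of reading the radius of convergence off the coefficient asymptotics, via the root-test formula R = 1/\limsup |a_n|^{1/n}; the coefficients a_n = n\,2^n are an illustrative choice with R = 1/2:

    def radius_estimate(a_n, n):
        # root-test estimate: 1 / |a_n|^(1/n)
        return abs(a_n) ** (-1.0 / n)

    for n in (10, 100, 1000):
        a_n = n * 2.0 ** n        # coefficients of sum_n n (2x)^n, radius of convergence 1/2
        print(n, radius_estimate(a_n, n))   # 0.397..., 0.477..., 0.496... -> 1/2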
Formal power series

See main article: Formal power series.

While many uses of power series refer to their sums, it is also possible to treat power series as formal sums, meaning that no addition operations are actually performed, and the symbol "+" is an abstract symbol of conjunction which is not necessarily interpreted as corresponding to addition. In this setting, the sequence of coefficients itself is of interest, rather than the convergence of the series. Formal power series are used in combinatorics to describe and study sequences that are otherwise difficult to handle, for example, using the method of generating functions. The Hilbert–Poincaré series is a formal power series used to study graded algebras.

Even if the limit of the power series is not considered, if the terms support appropriate structure then it is possible to define operations such as addition, multiplication, derivative, antiderivative for power series "formally", treating the symbol "+" as if it corresponded to addition. In the most common setting, the terms come from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product (see the short numerical sketch at the end of this section). In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring.[10] If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term.

Laurent series

See main article: Laurent series. Laurent series generalize power series by admitting terms into the series with negative as well as positive exponents. A Laurent series is thus any series of the form

\sum_{n=-\infty}^\infty a_n x^n.

If such a series converges, then in general it does so in an annulus rather than a disc, and possibly some boundary points. The series converges uniformly on compact subsets of the interior of the annulus of convergence.

Dirichlet series

See main article: Dirichlet series.

A Dirichlet series is one of the form

\sum_{n=1}^\infty \frac{a_n}{n^s},

where s is a complex number. For example, if all a_n are equal to 1, then the Dirichlet series is the Riemann zeta function

\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}.

Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of s is greater than a number called the abscissa of convergence. In many cases, a Dirichlet series can be extended to an analytic function outside the domain of convergence by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when Re(s) > 1, but the zeta function can be extended to a holomorphic function defined on \Complex\setminus\{1\} with a simple pole at 1. This series can be directly generalized to general Dirichlet series.

Trigonometric series

See main article: Trigonometric series. A series of functions in which the terms are trigonometric functions is called a trigonometric series:

\frac12 A_0 + \sum_{n=1}^\infty \left(A_n\cos nx + B_n \sin nx\right).

The most important example of a trigonometric series is the Fourier series of a function.
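Returning to formal power series, a minimal sketch of the Cauchy product of coefficient sequences, performed without any notion of convergence; the truncation length and the example series for 1/(1-x) are illustrative choices:

    def cauchy_product(a, b):
        # Coefficients of the product of two formal power series,
        # c_m = sum_{k=0}^{m} a_k * b_{m-k}, truncated to the shorter input length.
        n = min(len(a), len(b))
        return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

    one_over_1_minus_x = [1] * 8                # 1 + x + x^2 + ... as a formal series
    print(cauchy_product(one_over_1_minus_x, one_over_1_minus_x))
    # [1, 2, 3, 4, 5, 6, 7, 8]: the coefficients of 1/(1-x)^2 = sum (n+1) x^n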
History of the theory of infinite series

Development of infinite series

Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.[11] [12]

Mathematicians from the Kerala school were studying infinite series around 1350 CE.[13]

In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. Leonhard Euler in the 18th century developed the theory of hypergeometric series and q-series.

Convergence criteria

The investigation of the validity of infinite series is considered to begin with Gauss in the 19th century. Euler had already considered the hypergeometric series

1 + \frac{\alpha\beta}{1\cdot\gamma}x + \frac{\alpha(\alpha+1)\beta(\beta+1)}{1 \cdot 2 \cdot \gamma(\gamma+1)}x^2 + \cdots

on which Gauss published a memoir in 1812. It established simpler criteria of convergence, and the questions of remainders and the range of convergence.

Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria. The terms convergence and divergence had been introduced long before by Gregory (1668). Leonhard Euler and Gauss had given various criteria, and Colin Maclaurin had anticipated some of Cauchy's discoveries. Cauchy advanced the theory of power series by his expansion of a complex function in such a form.

Abel (1826) in his memoir on the binomial series

1 + \frac{m}{1}x + \frac{m(m-1)}{2!}x^2 + \cdots

corrected certain of Cauchy's conclusions, and gave a completely scientific summation of the series for complex values of m and x. He showed the necessity of considering the subject of continuity in questions of convergence.

Cauchy's methods led to special rather than general criteria, and the same may be said of Raabe (1832), who made the first elaborate investigation of the subject, of De Morgan (from 1842), whose logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have shown to fail within a certain region; of Bertrand (1842), Bonnet (1843), Malmsten (1846, 1847, the latter without integration); Stokes (1847), Paucker (1852), Chebyshev (1852), and Arndt (1853).

General criteria began with Kummer (1835), and have been studied by Eisenstein (1847), Weierstrass in his various contributions to the theory of functions, Dini (1867), DuBois-Reymond (1873), and many others. Pringsheim's memoirs (1889) present the most complete general theory.

Uniform convergence

The theory of uniform convergence was treated by Cauchy (1821), his limitations being pointed out by Abel, but the first to attack it successfully were Seidel and Stokes (1847–48). Cauchy took up the problem again (1853), acknowledging Abel's criticism, and reaching the same conclusions which Stokes had already found. Thomae used the doctrine (1866), but there was great delay in recognizing the importance of distinguishing between uniform and non-uniform convergence, in spite of the demands of the theory of functions.

Semi-convergence

A series is said to be semi-convergent (or conditionally convergent) if it is convergent but not absolutely convergent.

Semi-convergent series were studied by Poisson (1823), who also gave a general form for the remainder of the Maclaurin formula. The most important solution of the problem is due, however, to Jacobi (1834), who attacked the question of the remainder from a different standpoint and reached a different formula. This expression was also worked out, and another one given, by Malmsten (1847). Schlömilch (Zeitschrift, Vol. I, p. 192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function

F(x) = 1^n + 2^n + \cdots + (x - 1)^n.

Genocchi (1852) has further contributed to the theory.

Among the early writers was Wronski, whose "loi suprême" (1815) was hardly recognized until Cayley (1873) brought it into prominence.

Fourier series

Fourier series were being investigated as the result of physical considerations at the same time that Gauss, Abel, and Cauchy were working out the theory of infinite series. Series for the expansion of sines and cosines, of multiple arcs in powers of the sine and cosine of the arc had been treated by Jacob Bernoulli (1702) and his brother Johann Bernoulli (1701) and still earlier by Vieta.
Euler and Lagrange simplified the subject, as did Poinsot, Schröter, Glaisher, and Kummer.

Fourier (1807) set for himself a different problem, to expand a given function of x in terms of the sines or cosines of multiples of x, a problem which he embodied in his Théorie analytique de la chaleur (1822). Euler had already given the formulas for determining the coefficients in the series; Fourier was the first to assert and attempt to prove the general theorem. Poisson (1820–23) also attacked the problem from a different standpoint. Fourier did not, however, settle the question of convergence of his series, a matter left for Cauchy (1826) to attempt and for Dirichlet (1829) to handle in a thoroughly scientific manner (see convergence of Fourier series). Dirichlet's treatment (Crelle, 1829) of trigonometric series was the subject of criticism and improvement by Riemann (1854), Heine, Lipschitz, Schläfli, and du Bois-Reymond. Among other prominent contributors to the theory of trigonometric and Fourier series were Dini, Hermite, Halphen, Krause, Byerly and Appell.

Generalizations

Asymptotic series

Asymptotic series, otherwise asymptotic expansions, are infinite series whose partial sums become good approximations in the limit of some point of the domain. In general they do not converge, but they are useful as sequences of approximations, each of which provides a value close to the desired answer for a finite number of terms. The difference is that an asymptotic series cannot be made to produce an answer as exact as desired, the way that convergent series can. In fact, after a certain number of terms, a typical asymptotic series reaches its best approximation; if more terms are included, most such series will produce worse answers.

Divergent series

See main article: Divergent series. Under many circumstances, it is desirable to assign a limit to a series which fails to converge in the usual sense. A summability method is such an assignment of a limit to a subset of the set of divergent series which properly extends the classical notion of convergence. Summability methods include Cesàro summation, (C,k) summation, Abel summation, and Borel summation, in increasing order of generality (and hence applicable to increasingly divergent series).

A variety of general results concerning possible summability methods are known. The Silverman–Toeplitz theorem characterizes matrix summability methods, which are methods for summing a divergent series by applying an infinite matrix to the vector of coefficients. The most general method for summing a divergent series is non-constructive, and concerns Banach limits.
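As a small illustration of the simplest of these methods, a sketch of Cesàro (C,1) summation applied to Grandi's divergent series 1 - 1 + 1 - 1 + \cdots, to which it assigns the value 1/2 (the truncation length is an illustrative choice):

    def cesaro_mean(terms):
        # Cesàro (C,1) summation: average the partial sums s_0, s_1, ..., s_{N-1}.
        partial_sums, total = [], 0.0
        for t in terms:
            total += t
            partial_sums.append(total)
        return sum(partial_sums) / len(partial_sums)

    grandi = [(-1) ** n for n in range(10000)]   # 1 - 1 + 1 - 1 + ...
    print(cesaro_mean(grandi))                   # 0.5, the Cesàro sum of Grandi's series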
Summations over arbitrary index sets

Definitions may be given for sums over an arbitrary index set I. There are two main differences with the usual notion of series: first, there is no specific order given on the set I; second, this set I may be uncountable. The notion of convergence needs to be strengthened, because the concept of conditional convergence depends on the ordering of the index set.

If a : I \to G is a function from an index set I to a set G, then the "series" associated to a is the formal sum of the elements a(x) \in G over the index elements x \in I, denoted by

\sum_{x \in I} a(x).

When the index set is the natural numbers I = \N, the function a : \N \to G is a sequence denoted by a(n) = a_n. A series indexed on the natural numbers is an ordered formal sum and so we rewrite \sum_{n \in \N} as \sum_{n=0}^{\infty} in order to emphasize the ordering induced by the natural numbers. Thus, we obtain the common notation for a series indexed by the natural numbers

\sum_{n=0}^{\infty} a_n = a_0 + a_1 + a_2 + \cdots.

Families of non-negative numbers

When summing a family \left\{a_i : i \in I\right\} of non-negative real numbers, define

\sum_{i \in I} a_i = \sup \left\{ \sum_{i \in A} a_i : A \subseteq I, \ A \text{ finite} \right\} \in [0, +\infty].

When the supremum is finite then the set of i \in I such that a_i > 0 is countable. Indeed, for every n \geq 1, the cardinality \left|A_n\right| of the set A_n = \left\{i \in I : a_i > 1/n\right\} is finite because

\frac{1}{n} \, \left|A_n\right| = \sum_{i \in A_n} \frac{1}{n} \leq \sum_{i \in A_n} a_i \leq \sum_{i \in I} a_i < \infty.

If I is countably infinite and enumerated as I = \left\{i_0, i_1, \ldots\right\} then the above defined sum satisfies

\sum_{i \in I} a_i = \sum_{k=0}^{+\infty} a_{i_k},

provided the value +\infty is allowed for the sum of the series.

Any sum over non-negative reals can be understood as the integral of a non-negative function with respect to the counting measure, which accounts for the many similarities between the two constructions.

Abelian topological groups

Let a : I \to X be a map, also denoted by \left(a_i\right)_{i \in I}, from some non-empty set I into a Hausdorff abelian topological group X. Let \operatorname{Finite}(I) be the collection of all finite subsets of I, with \operatorname{Finite}(I) viewed as a directed set, ordered under inclusion \subseteq with union as join. The family \left(a_i\right)_{i \in I} is said to be unconditionally summable if the following limit, which is denoted by \sum_{i \in I} a_i and is called the sum of \left(a_i\right)_{i \in I}, exists in X:

\sum_{i \in I} a_i := \lim_{A \in \operatorname{Finite}(I)} \sum_{i \in A} a_i = \lim \left\{ \sum_{i \in A} a_i : A \subseteq I, \ A \text{ finite} \right\}.

Saying that the sum S := \sum_{i \in I} a_i is the limit of finite partial sums means that for every neighborhood V of the origin in X, there exists a finite subset A_0 of I such that

S - \sum_{i \in A} a_i \in V \qquad \text{for every finite} \; A \supseteq A_0.

Because \operatorname{Finite}(I) is not totally ordered, this is not a limit of a sequence of partial sums, but rather of a net.[14] [15]

For every neighborhood W of the origin in X, there is a smaller neighborhood V such that V - V \subseteq W. It follows that the finite partial sums of an unconditionally summable family \left(a_i\right)_{i \in I} form a Cauchy net, that is, for every neighborhood W of the origin in X, there exists a finite subset A_0 of I such that

\sum_{i \in A_1} a_i - \sum_{i \in A_2} a_i \in W \qquad \text{for all} \; A_1, A_2 \supseteq A_0,

which implies that a_i \in W for every i \in I \setminus A_0 (by taking A_1 := A_0 \cup \{i\} and A_2 := A_0).

When X is complete, a family \left(a_i\right)_{i \in I} is unconditionally summable in X if and only if the finite sums satisfy the latter Cauchy net condition. When X is complete and \left(a_i\right)_{i \in I} is unconditionally summable in X, then for every subset J \subseteq I, the corresponding subfamily \left(a_j\right)_{j \in J} is also unconditionally summable in X.

When the sum of a family of non-negative numbers, in the extended sense defined before, is finite, then it coincides with the sum in the topological group X = \R.

If a family \left(a_i\right)_{i \in I} in X is unconditionally summable then for every neighborhood W of the origin in X, there is a finite subset A_0 \subseteq I such that a_i \in W for every index i not in A_0. If X is a first-countable space then it follows that the set of i \in I such that a_i \neq 0 is countable. This need not be true in a general abelian topological group (see examples below).

Unconditionally convergent series

Suppose that I = \N. If a family a_n, n \in \N, is unconditionally summable in a Hausdorff abelian topological group X, then the series in the usual sense converges and has the same sum,

\sum_{n=0}^\infty a_n = \sum_{n \in \N} a_n.

By nature, the definition of unconditional summability is insensitive to the order of the summation.
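In the concrete case X = \R, a short numeric contrast shows why this order-insensitivity fails without unconditional (here, absolute) convergence: a rearrangement of the conditionally convergent alternating harmonic series changes its sum, in line with the permutation statements that follow. The rearrangement pattern and the truncation lengths are illustrative choices.

    import math

    # Alternating harmonic series 1 - 1/2 + 1/3 - ... sums to ln 2 in its natural order.
    natural = sum((-1) ** (n + 1) / n for n in range(1, 30001))

    # Rearranged: take two positive terms (odd denominators), then one negative (even).
    pos = (1.0 / k for k in range(1, 10 ** 7, 2))      # 1, 1/3, 1/5, ...
    neg = (-1.0 / k for k in range(2, 10 ** 7, 2))     # -1/2, -1/4, ...
    rearranged = 0.0
    for _ in range(10000):
        rearranged += next(pos) + next(pos) + next(neg)

    print(natural, math.log(2))           # both about 0.693
    print(rearranged, 1.5 * math.log(2))  # the rearranged partial sums approach (3/2) ln 2 instead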
When \sum a_n is unconditionally summable, then the series remains convergent after any permutation \sigma : \N \to \N of the set \N of indices, with the same sum,

\sum_{n=0}^\infty a_{\sigma(n)} = \sum_{n=0}^\infty a_n.

Conversely, if every permutation of a series \sum a_n converges, then the series is unconditionally convergent. When X is complete then unconditional convergence is also equivalent to the fact that all subseries are convergent; if X is a Banach space, this is equivalent to saying that for every sequence of signs \varepsilon_n = \pm 1, the series

\sum_{n=0}^\infty \varepsilon_n a_n

converges in X.

Series in topological vector spaces

If X is a topological vector space (TVS) and \left(x_i\right)_{i \in I} is a (possibly uncountable) family in X then this family is summable if the limit \lim_{A \in \operatorname{Finite}(I)} x_A of the net \left(x_A\right)_{A \in \operatorname{Finite}(I)} exists in X, where \operatorname{Finite}(I) is the directed set of all finite subsets of I directed by inclusion \subseteq and

x_A := \sum_{i \in A} x_i.

It is called absolutely summable if in addition, for every continuous seminorm p on X, the family \left(p\left(x_i\right)\right)_{i \in I} is summable. If X is a normable space and if \left(x_i\right)_{i \in I} is an absolutely summable family in X, then necessarily all but a countable collection of the x_i's are zero. Hence, in normed spaces, it is usually only ever necessary to consider series with countably many terms.

Summable families play an important role in the theory of nuclear spaces.

Series in Banach and seminormed spaces

The notion of series can be easily extended to the case of a seminormed space. If x_n is a sequence of elements of a normed space X and if x \in X then the series \sum x_n converges to x in X if the sequence of partial sums of the series \left(\sum_{n=0}^N x_n\right)_{N=1}^{\infty} converges to x in X; to wit,

\left\|x - \sum_{n=0}^N x_n\right\| \to 0 \quad \text{as} \quad N \to \infty.

More generally, convergence of series can be defined in any abelian Hausdorff topological group. Specifically, in this case, \sum x_n converges to x if the sequence of partial sums converges to x.

If (X, |\cdot|) is a seminormed space, then the notion of absolute convergence becomes: A series \sum_{i \in I} x_i of vectors in X converges absolutely if

\sum_{i \in I} \left|x_i\right| < +\infty,

in which case all but at most countably many of the values \left|x_i\right| are necessarily zero.

If a countable series of vectors in a Banach space converges absolutely then it converges unconditionally, but the converse only holds in finite-dimensional Banach spaces (a theorem of Dvoretzky and Rogers).

Well-ordered sums

Conditionally convergent series can be considered if I is a well-ordered set, for example, an ordinal number \alpha_0. In this case, define by transfinite recursion:

\sum_{\beta < \alpha + 1} a_\beta = a_{\alpha} + \sum_{\beta < \alpha} a_\beta

and for a limit ordinal \alpha,

\sum_{\beta < \alpha} a_\beta = \lim_{\gamma \to \alpha} \sum_{\beta < \gamma} a_\beta

if this limit exists. If all limits exist up to \alpha_0, then the series converges.

Examples

Given a function f : X \to Y into an abelian topological group Y, define for every a \in X,

f_a(x) = \begin{cases} 0 & x \neq a, \\ f(a) & x = a, \end{cases}

a function whose support is the singleton \{a\}. Then

f = \sum_{a \in X} f_a

in the topology of pointwise convergence (that is, the sum is taken in the infinite product group Y^X).

In the definition of partitions of unity, one constructs sums of functions over an arbitrary index set I,

\sum_{i \in I} \varphi_i(x) = 1.

While, formally, this requires a notion of sums of uncountable series, by construction there are, for every given x, only finitely many nonzero terms in the sum, so issues regarding convergence of such sums do not arise.
Actually, one usually assumes more: the family of functions is locally finite, that is, for every x there is a neighborhood of x in which all but a finite number of functions vanish. Any regularity property of the \varphi_i, such as continuity or differentiability, that is preserved under finite sums will be preserved for the sum of any subcollection of this family of functions.

In the first uncountable ordinal \omega_1 viewed as a topological space in the order topology, the constant function f : \left[0, \omega_1\right) \to \left[0, \omega_1\right] given by f(\alpha) = 1 satisfies

\sum_{\alpha \in [0, \omega_1)} f(\alpha) = \omega_1

only if one takes a limit over all countable partial sums, rather than finite partial sums.

Notes and References

[1] Thompson, Silvanus P.; Gardner, Martin (1998). Calculus Made Easy. Macmillan. ISBN 978-0-312-18548-0.
[2] Weisstein, Eric W. "Series". mathworld.wolfram.com. Retrieved 2020-08-30.
[3] "Infinite Series". www.mathsisfun.com. Retrieved 2020-08-30.
[4] Gasper, G.; Rahman, M. (2004). Basic Hypergeometric Series. Cambridge University Press.
[5] "Positive and Negative Terms: Alternating Series". CK-12 Calculus Concepts. https://www.ck12.org/book/CK-12-Calculus-Concepts/section/9.9/
[6] Johansson, F. (2016). "Computing hypergeometric functions rigorously". arXiv preprint arXiv:1606.06977.
[7] Higham, N. J. (2008). Functions of Matrices: Theory and Computation. Society for Industrial and Applied Mathematics.
[8] Higham, N. J. (2009). "The scaling and squaring method for the matrix exponential revisited". SIAM Review, 51(4), 747–764.
[9] "How and How Not to Compute the Exponential of a Matrix". http://www.maths.manchester.ac.uk/~higham/talks/exp10.pdf
[10] §III.2.11.
[11] O'Connor, J. J.; Robertson, E. F. (February 1996). "A history of calculus". University of St Andrews. Retrieved 2007-08-07.
[12] Bidwell, James K. (30 November 1993). "Archimedes and Pi—Revisited". School Science and Mathematics. 94(3).
[13] "Indians predated Newton 'discovery' by 250 years". manchester.ac.uk.
[14] Bourbaki, Nicolas (1998). General Topology: Chapters 1–4. Springer. pp. 261–270. ISBN 978-3-540-64241-1.
[15] Choquet, Gustave (1966). Topology. Academic Press. pp. 216–231. ISBN 978-0-12-173450-3.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Series (mathematics)".