Export
  • 1
    ISSN: 1573-2878
    Keywords: Mathematical programming ; function minimization ; method of dual matrices ; computing methods ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract In Ref. 2, four algorithms of dual matrices for function minimization were introduced. These algorithms are characterized by the simultaneous use of two matrices and by the property that the one-dimensional search for the optimal stepsize is not needed for convergence. For a quadratic function, these algorithms lead to the solution in at most n+1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n+2. In this paper, the above-mentioned algorithms are tested numerically by using five nonquadratic functions. In order to investigate the effects of the stepsize on the performances of these algorithms, four schemes for the stepsize factor are employed, two corresponding to small-step processes and two corresponding to large-step processes. The numerical results show that, in spite of the wide range employed in the choice of the stepsize factor, all algorithms exhibit satisfactory convergence properties and compare favorably with the corresponding quadratically convergent algorithms using one-dimensional searches for optimal stepsizes.
    Type of Medium: Electronic Resource
  • 2
    ISSN: 1573-2878
    Keywords: Two-point boundary-value problems ; differential equations ; Newton-Raphson methods ; computing methods ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract A method based on matching a zero of the right-hand side of the differential equations, in a two-point boundary-value problem, to the boundary conditions is suggested. Effectiveness of the procedure is tested on three nonlinear, two-point boundary-value problems.
    Type of Medium: Electronic Resource
  • 3
    ISSN: 1573-2878
    Keywords: Calculus of variations ; optimal control ; computing methods ; numerical methods ; boundary-value problems ; modified quasilinearization algorithm ; nondifferential constraints
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, nondifferential constraints, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. A modified quasilinearization algorithm is developed. Its main property is the descent property in the performance index R, the cumulative error in the constraints and the optimality conditions. Modified quasilinearization differs from ordinary quasilinearization because of the inclusion of the scaling factor (or stepsize) α in the system of variations. The stepsize is determined by a one-dimensional search on the performance index R. Since the first variation δR is negative, the decrease in R is guaranteed if α is sufficiently small. Convergence to the solution is achieved when R becomes smaller than some preselected value. In order to start the algorithm, some nominal functions x(t), u(t), π and nominal multipliers λ(t), ρ(t), μ must be chosen. In a real problem, the selection of the nominal functions can be made on the basis of physical considerations. Concerning the nominal multipliers, no useful guidelines have been available thus far. In this paper, an auxiliary minimization algorithm for selecting the multipliers optimally is presented: the performance index R is minimized with respect to λ(t), ρ(t), μ. Since the functional R is quadratically dependent on the multipliers, the resulting variational problem is governed by optimality conditions which are linear and, therefore, can be solved without difficulty. To facilitate the numerical solution on digital computers, the actual time θ is replaced by the normalized time t, defined in such a way that the extremal arc has a normalized time length Δt=1. In this way, variable-time terminal conditions are transformed into fixed-time terminal conditions. The actual time τ at which the terminal boundary is reached is regarded to be a component of the parameter π being optimized. The present general formulation differs from that of Ref. 3 because of the inclusion of the nondifferential constraints to be satisfied everywhere over the interval 0⩽t⩽1. Its importance lies in that (i) many optimization problems arise directly in the form considered here, (ii) there are problems involving state equality constraints which can be reduced to the present scheme through suitable transformations, and (iii) there are some problems involving inequality constraints which can be reduced to the present scheme through the introduction of auxiliary variables. Numerical examples are presented for the free-final-time case. These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper.
    Type of Medium: Electronic Resource
  • 4
    ISSN: 1573-2878
    Keywords: Nonlinear programming algorithms ; numerical methods ; anti-jamming strategies
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract A convergence theory for a class of anti-jamming strategies for nonlinear programming algorithms is presented. This theory generalizes previous results in this area by Zoutendijk, Topkis and Veinott, Mangasarian, and others; it is applicable to algorithms in which the anti-jamming parameter is fixed at some positive value as well as to algorithms in which it tends to zero. In addition, under relatively weak hypotheses, convergence of the entire sequence of iterates is proved.
    Type of Medium: Electronic Resource
  • 5
    ISSN: 1573-2878
    Keywords: Fredholm integral equations ; iterative methods ; Neumann series ; numerical methods ; transport theory
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract The present paper extends the synthetic method of transport theory to a large class of integral equations. Convergence and divergence properties of the algorithm are studied analytically, and numerical examples are presented which demonstrate the expected theoretical behavior. It is shown that, in some instances, the computational advantage over the familiar Neumann approach is substantial. [An illustrative sketch of the baseline Neumann iteration appears after this list.]
    Type of Medium: Electronic Resource
  • 6
    ISSN: 1573-2878
    Keywords: Stereology ; statistics ; packing problem ; Abel integral equations ; Volterra integral equations ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract We discuss the application of integral equation techniques to two broad areas of particle statistics, namely, stereology and packing. Problems in stereology lead to the inversion of Abel-type integral equations; we present a brief survey of existing methods, analytical and numerical, for doing this. Packing problems lead to Volterra equations which, in simple cases, can be solved exactly and, in other cases, need to be solved numerically. Methods for doing this are presented along with some numerical results. [An illustrative sketch of a simple Abel-inversion scheme appears after this list.]
    Type of Medium: Electronic Resource
  • 7
    ISSN: 1573-2878
    Keywords: Function minimization ; nonlinear constraints ; nonlinear approximation ; numerical methods
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract An iterative technique is developed to solve the problem of minimizing a function f(y) subject to certain nonlinear constraints g(y)=0. The variables are separated into the basic variables x and the independent variables u. Each iteration consists of a gradient phase and a restoration phase. The gradient phase involves a movement (on a surface that is linear in the basic variables and nonlinear in the independent variables) from a feasible point to a varied point in a direction based on the reduced gradient. The restoration phase involves a movement (in a hyperplane parallel to x-space) from the nonfeasible varied point to a new feasible point. The basic scheme is further modified to implement the method of conjugate gradients. The work required in the restoration phase is considerably reduced when compared with the existing methods.
    Type of Medium: Electronic Resource
  • 8
    ISSN: 1573-2878
    Keywords: Integral equations ; numerical methods ; differential equations ; numerical integration
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract An initial-value method of Bownds for solving Volterra integral equations is reexamined using a variable-step integrator to solve the differential equations. It is shown that such equations may be easily solved to an accuracy of O(10⁻⁸), the error depending essentially on that incurred in truncating expansions of the kernel to a degenerate one. [An illustrative sketch of the degenerate-kernel reduction appears after this list.]
    Type of Medium: Electronic Resource
  • 9
    ISSN: 1573-2878
    Keywords: Optimal control ; numerical methods ; computing methods ; gradient methods ; gradient-restoration algorithms ; sequential gradient-restoration algorithms ; general boundary conditions ; nondifferential constraints ; bounded control ; bounded state
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract This paper considers the numerical solution of two classes of optimal control problems, called Problem P1 and Problem P2 for easy identification. Problem P1 involves a functional I subject to differential constraints and general boundary conditions. It consists of finding the state x(t), the control u(t), and the parameter π so that the functional I is minimized, while the constraints and the boundary conditions are satisfied to a predetermined accuracy. Problem P2 extends Problem P1 to include nondifferential constraints to be satisfied everywhere along the interval of integration. Algorithms are developed for both Problem P1 and Problem P2. The approach taken is a sequence of two-phase cycles, composed of a gradient phase and a restoration phase. The gradient phase involves one iteration and is designed to decrease the value of the functional, while the constraints are satisfied to first order. The restoration phase involves one or more iterations and is designed to force constraint satisfaction to a predetermined accuracy, while the norm squared of the variations of the control, the parameter, and the missing components of the initial state is minimized. The principal property of both algorithms is that they produce a sequence of feasible suboptimal solutions: the functions obtained at the end of each cycle satisfy the constraints to a predetermined accuracy. Therefore, the values of the functional I corresponding to any two elements of the sequence are comparable. The stepsize of the gradient phase is determined by a one-dimensional search on the augmented functional J, while the stepsize of the restoration phase is obtained by a one-dimensional search on the constraint error P. The gradient stepsize and the restoration stepsize are chosen so that the restoration phase preserves the descent property of the gradient phase. Therefore, the value of the functional I at the end of any complete gradient-restoration cycle is smaller than the value of the same functional at the beginning of that cycle. The algorithms presented here differ from those of Refs. 1 and 2, in that it is not required that the state vector be given at the initial point. Instead, the initial conditions can be absolutely general. In analogy with Refs. 1 and 2, the present algorithms are capable of handling general final conditions; therefore, they are suited for the solution of optimal control problems with general boundary conditions. Their importance lies in the fact that many optimal control problems involve initial conditions of the type considered here. Six numerical examples are presented in order to illustrate the performance of the algorithms associated with Problem P1 and Problem P2. The numerical results show the feasibility as well as the convergence characteristics of these algorithms.
    Type of Medium: Electronic Resource
  • 10
    ISSN: 1573-2878
    Keywords: Convergence ; iterative methods ; numerical methods ; characterization of convergence ; linear complementarity problem ; matrix splitting
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract Necessary and sufficient conditions are established for the convergence of various iterative methods for solving the linear complementarity problem. The fundamental tool used is the classical notion of matrix splitting in numerical analysis. The results derived are similar to some well-known theorems on the convergence of iterative methods for square systems of linear equations. An application of the results to a strictly convex quadratic program is also given. [An illustrative sketch of a matrix-splitting iteration appears after this list.]
    Type of Medium: Electronic Resource
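Illustrative sketch for record 5: the abstract contrasts the synthetic method with the familiar Neumann (successive-approximation) iteration. The sketch below shows only that Neumann baseline, applied to a hypothetical Fredholm equation of the second kind discretized by a simple Nyström (trapezoidal quadrature) rule; the paper's synthetic acceleration is not reproduced here, and the kernel K(x, y) = x·y with f(x) = 1 is an assumed test problem chosen because its exact solution is known.

```python
import numpy as np

# Hypothetical test problem on [0, 1]:
#     u(x) = f(x) + \int_0^1 K(x, y) u(y) dy,   K(x, y) = x * y,   f(x) = 1,
# whose exact solution is u(x) = 1 + 0.75 * x.
n = 201
x, h = np.linspace(0.0, 1.0, n, retstep=True)
w = np.full(n, h)
w[0] = w[-1] = h / 2.0                 # trapezoidal quadrature weights
K = np.outer(x, x)                     # K(x_i, y_j) = x_i * y_j
f = np.ones(n)

# Neumann (successive-approximation) iteration: u_{k+1} = f + K u_k.
u = f.copy()
for k in range(200):
    u_new = f + K @ (w * u)            # quadrature approximation of the integral term
    if np.linalg.norm(u_new - u, np.inf) < 1e-12:
        u = u_new
        break
    u = u_new

print("Neumann sweeps used:", k + 1)
print("max deviation from exact solution:", np.max(np.abs(u - (1.0 + 0.75 * x))))
```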
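Illustrative sketch for record 6: stereological problems lead to Abel-type equations g(x) = ∫₀ˣ f(t)/√(x−t) dt. The sketch below shows one simple numerical inversion scheme of the kind the abstract surveys, a product-integration rule that treats f as piecewise constant and solves the resulting lower-triangular system by forward substitution; it is not the paper's own method, and the test data g(x) = (4/3)x^(3/2), for which f(t) = t, are an assumption.

```python
import numpy as np

# Abel-type equation of the first kind:  g(x) = \int_0^x f(t) / sqrt(x - t) dt.
# Treat f as piecewise constant on each subinterval [t_{j-1}, t_j]; integrating
# 1/sqrt(x_i - t) exactly over that subinterval gives the weight
#     w_ij = 2 * (sqrt(x_i - t_{j-1}) - sqrt(x_i - t_j)),
# so g(x_i) = sum_{j<=i} w_ij f_j, a lower-triangular system solved forward.
def invert_abel(t, g):
    n = len(t)
    f = np.zeros(n)
    for i in range(1, n):
        s = 0.0
        for j in range(1, i):
            w_ij = 2.0 * (np.sqrt(t[i] - t[j - 1]) - np.sqrt(t[i] - t[j]))
            s += w_ij * f[j]
        w_ii = 2.0 * np.sqrt(t[i] - t[i - 1])
        f[i] = (g[i] - s) / w_ii
    f[0] = f[1]          # f(0) is not determined by the rule; copy the neighbor
    return f

# Hypothetical test: f(t) = t gives g(x) = (4/3) * x**1.5 in closed form.
t = np.linspace(0.0, 1.0, 401)
g = (4.0 / 3.0) * t ** 1.5
f_num = invert_abel(t, g)
print("max error against f(t) = t:", np.max(np.abs(f_num - t)))
```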
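Illustrative sketch for record 8: the abstract reduces a Volterra integral equation to ordinary differential equations by truncating the kernel to a degenerate (separable) one and then integrating with a variable-step routine. The sketch below shows that reduction for an assumed kernel that is already degenerate, K(t, s) = e^(t−s) with f(t) = 1, so the exact solution u(t) = (1 + e^(2t))/2 is available for comparison; it is a generic illustration of the idea, not Bownds' code.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Volterra equation of the second kind with a degenerate kernel:
#     u(t) = f(t) + \int_0^t K(t, s) u(s) ds,   K(t, s) = sum_i a_i(t) b_i(s).
# With z_i(t) = \int_0^t b_i(s) u(s) ds, we get u(t) = f(t) + sum_i a_i(t) z_i(t)
# and the ODE system z_i'(t) = b_i(t) u(t), z_i(0) = 0.
a = [np.exp]                        # a_1(t) = e^t      (assumed kernel factor)
b = [lambda s: np.exp(-s)]          # b_1(s) = e^{-s}   (assumed kernel factor)
f = lambda t: 1.0                   # assumed forcing term

def u_from_z(t, z):
    """Recover u(t) = f(t) + sum_i a_i(t) z_i(t)."""
    return f(t) + sum(ai(t) * zi for ai, zi in zip(a, z))

def rhs(t, z):
    """Right-hand side z_i'(t) = b_i(t) * u(t)."""
    u = u_from_z(t, z)
    return [bi(t) * u for bi in b]

# Variable-step integration, as the abstract describes.
sol = solve_ivp(rhs, (0.0, 1.0), y0=[0.0], rtol=1e-10, atol=1e-12, dense_output=True)
t_end = 1.0
u_num = u_from_z(t_end, sol.sol(t_end))
u_exact = 0.5 * (1.0 + np.exp(2.0 * t_end))
print(f"u(1) numeric = {u_num:.10f}, exact = {u_exact:.10f}, error = {abs(u_num - u_exact):.2e}")
```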
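Illustrative sketch for record 10: the abstract studies convergence of iterative methods for the linear complementarity problem (LCP) via matrix splittings. The sketch below implements one familiar member of that family, projected Gauss-Seidel, on a small assumed problem with a symmetric positive definite matrix (a standard sufficient condition for this particular splitting to converge); the paper's specific splittings and its necessary-and-sufficient conditions are not reproduced.

```python
import numpy as np

def projected_gauss_seidel(M, q, tol=1e-10, max_iter=10_000):
    """Solve the LCP  w = M z + q >= 0, z >= 0, z^T w = 0  by sweeping over the
    components and projecting each Gauss-Seidel update onto z_i >= 0."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(max_iter):
        z_old = z.copy()
        for i in range(n):
            # Row-i residual with the current z, excluding the diagonal term.
            r_i = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r_i / M[i, i])
        if np.linalg.norm(z - z_old, np.inf) < tol:
            break
    return z

# Assumed example data (symmetric positive definite M).
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
q = np.array([-1.0, 2.0, -3.0])
z = projected_gauss_seidel(M, q)
w = M @ z + q
print("z =", z)
print("w =", w)
print("complementarity z.w =", z @ w)
```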