• 1
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Mathematical programming ; function minimization ; method of dual matrices ; computing methods ; numerical methods
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract In Ref. 2, four algorithms of dual matrices for function minimization were introduced. These algorithms are characterized by the simultaneous use of two matrices and by the property that the one-dimensional search for the optimal stepsize is not needed for convergence. For a quadratic function, these algorithms lead to the solution in at most n+1 iterations, where n is the number of variables in the function. Since the one-dimensional search is not needed, the total number of gradient evaluations for convergence is at most n+2. In this paper, the above-mentioned algorithms are tested numerically by using five nonquadratic functions. In order to investigate the effects of the stepsize on the performances of these algorithms, four schemes for the stepsize factor are employed, two corresponding to small-step processes and two corresponding to large-step processes. The numerical results show that, in spite of the wide range employed in the choice of the stepsize factor, all algorithms exhibit satisfactory convergence properties and compare favorably with the corresponding quadratically convergent algorithms using one-dimensional searches for optimal stepsizes.
Type of Medium: Electronic Resource
• 2
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Two-point boundary-value problems ; differential equations ; Newton-Raphson methods ; computing methods ; numerical methods
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract A method based on matching a zero of the right-hand side of the differential equations, in a two-point boundary-value problem, to the boundary conditions is suggested. Effectiveness of the procedure is tested on three nonlinear, two-point boundary-value problems.
Type of Medium: Electronic Resource
• 3
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Calculus of variations ; optimal control ; computing methods ; numerical methods ; boundary-value problems ; modified quasilinearization algorithm ; nondifferential constraints
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract This paper considers the numerical solution of optimal control problems involving a functional I subject to differential constraints, nondifferential constraints, and terminal constraints. The problem is to find the state x(t), the control u(t), and the parameter π so that the functional is minimized, while the constraints are satisfied to a predetermined accuracy. A modified quasilinearization algorithm is developed. Its main property is the descent property in the performance index R, the cumulative error in the constraints and the optimality conditions. Modified quasilinearization differs from ordinary quasilinearization because of the inclusion of the scaling factor (or stepsize) α in the system of variations. The stepsize is determined by a one-dimensional search on the performance index R. Since the first variation δR is negative, the decrease in R is guaranteed if α is sufficiently small. Convergence to the solution is achieved when R becomes smaller than some preselected value. In order to start the algorithm, some nominal functions x(t), u(t), π and nominal multipliers λ(t), ρ(t), μ must be chosen. In a real problem, the selection of the nominal functions can be made on the basis of physical considerations. Concerning the nominal multipliers, no useful guidelines have been available thus far. In this paper, an auxiliary minimization algorithm for selecting the multipliers optimally is presented: the performance index R is minimized with respect to λ(t), ρ(t), μ. Since the functional R is quadratically dependent on the multipliers, the resulting variational problem is governed by optimality conditions which are linear and, therefore, can be solved without difficulty. To facilitate the numerical solution on digital computers, the actual time θ is replaced by the normalized time t, defined in such a way that the extremal arc has a normalized time length Δt=1.
In this way, variable-time terminal conditions are transformed into fixed-time terminal conditions. The actual time τ at which the terminal boundary is reached is regarded as a component of the parameter π being optimized. The present general formulation differs from that of Ref. 3 because of the inclusion of the nondifferential constraints to be satisfied everywhere over the interval 0⩽t⩽1. Its importance lies in that (i) many optimization problems arise directly in the form considered here, (ii) there are problems involving state equality constraints which can be reduced to the present scheme through suitable transformations, and (iii) there are some problems involving inequality constraints which can be reduced to the present scheme through the introduction of auxiliary variables. Numerical examples are presented for the free-final-time case. These examples demonstrate the feasibility as well as the rapidity of convergence of the technique developed in this paper.
Type of Medium: Electronic Resource
• 4
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Mathematical programming ; nonlinear programming ; penalty-function methods ; convergence rate ; method of centers
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract Convergence of a method of centers algorithm for solving nonlinear programming problems is considered. The algorithm is defined so that the subproblems that must be solved during its execution may be solved by finite-step procedures. Conditions are given under which the algorithm generates sequences of feasible points and constraint multiplier vectors that have accumulation points satisfying the Fritz John or the Kuhn-Tucker optimality conditions. Under stronger assumptions, linear convergence rates are established for the sequences of objective function, constraint function, feasible point, and multiplier values.
Type of Medium: Electronic Resource
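The iteration this abstract analyzes can be illustrated with a toy sketch of a method of centers: each subproblem maximizes a "distance" function d(x) = min[f(x_k) − f(x), g(x)] over the current level set. The 1-D test problem (minimize f(x) = x subject to g(x) = x ≥ 0, optimum x* = 0), the particular distance function, and the finite grid-search subproblem solver are all illustrative assumptions, not taken from the paper.

```python
def center_step(xk, lo=0.0, hi=None, steps=100001):
    # finite-step subproblem: maximize d(x) = min(f(xk) - f(x), g(x)) on a grid,
    # with f(x) = x and g(x) = x in this toy problem
    if hi is None:
        hi = xk
    best_x, best_d = lo, float("-inf")
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        d = min(xk - x, x)
        if d > best_d:
            best_x, best_d = x, d
    return best_x

x = 1.0
for _ in range(20):
    x = center_step(x)   # each center step roughly halves the distance to x* = 0
```

Here each center lands midway between the previous iterate and the constraint boundary, so the objective values converge linearly, matching the linear rates established in the abstract under its stronger assumptions.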
• 5
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Mathematical programming ; method of multipliers ; penalty function methods ; inequality constraints
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract This paper deals with the numerical solution of the general mathematical programming problem of minimizing a scalar function f(x) subject to the vector constraints φ(x) = 0 and ψ(x) ≥ 0. The approach used is an extension of the Hestenes method of multipliers, which deals with the equality constraints only. The above problem is replaced by a sequence of problems of minimizing the augmented penalty function Ω(x, λ, μ, k) = f(x) + λ^T φ(x) + kφ^T(x)φ(x) − μ^T ψ̃(x) + kψ̃^T(x)ψ̃(x). The vectors λ and μ, μ ≥ 0, are respectively the Lagrange multipliers for φ(x) and ψ̃(x), and the elements of ψ̃(x) are defined by ψ̃^(j)(x) = min[ψ^(j)(x), (1/2k)μ^(j)]. The scalar k > 0 is the penalty constant, held fixed throughout the algorithm. Rules are given for updating the multipliers for each minimization cycle. Justification is given for trusting that the sequence of minimizing points will converge to the solution point of the original problem.
Type of Medium: Electronic Resource
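The augmented penalty function above is concrete enough to sketch. The following toy implementation specializes it to one inequality constraint; the test problem (minimize f(x) = x², subject to ψ(x) = x − 1 ≥ 0, with solution x* = 1 and multiplier μ* = 2), the multiplier update μ ← max(μ − 2k ψ̃(x*), 0), and the ternary-search inner minimizer are assumptions for illustration, not the paper's own update rules.

```python
def psi_tilde(x, mu, k):
    # clipped constraint element: psi_tilde = min[psi(x), mu/(2k)], psi(x) = x - 1
    return min(x - 1.0, mu / (2.0 * k))

def omega(x, mu, k):
    # augmented penalty function: Omega = f(x) - mu*psi_tilde + k*psi_tilde^2
    pt = psi_tilde(x, mu, k)
    return x * x - mu * pt + k * pt * pt

def argmin_omega(mu, k, lo=-2.0, hi=3.0, iters=200):
    # Omega is convex in x for this problem, so a ternary search suffices
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if omega(m1, mu, k) < omega(m2, mu, k):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def method_of_multipliers(k=10.0, outer=10):
    mu, x = 0.0, 0.0          # multiplier estimate, kept >= 0
    for _ in range(outer):
        x = argmin_omega(mu, k)                        # one minimization cycle
        mu = max(mu - 2.0 * k * psi_tilde(x, mu, k), 0.0)
    return x, mu

x, mu = method_of_multipliers()
```

With k held fixed, the multiplier error here contracts by a factor of roughly 1/(1+k) per cycle, so the iterates approach (x, μ) = (1, 2) without driving k to infinity, which is the practical appeal of the multiplier approach over pure penalty methods.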
• 6
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Nonlinear programming algorithms ; numerical methods ; anti-jamming strategies
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract A convergence theory for a class of anti-jamming strategies for nonlinear programming algorithms is presented. This theory generalizes previous results in this area by Zoutendijk, Topkis and Veinott, Mangasarian, and others; it is applicable to algorithms in which the anti-jamming parameter is fixed at some positive value as well as to algorithms in which it tends to zero. In addition, under relatively weak hypotheses, convergence of the entire sequence of iterates is proved.
Type of Medium: Electronic Resource
• 7
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Mathematical programming ; optimization theory ; minimization algorithms
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract In this paper, we prove a theorem of convergence to a point for descent minimization methods. When the objective function is differentiable, the convergence point is a stationary point. The theorem, however, is applicable also to nondifferentiable functions. This theorem is then applied to prove convergence of some nongradient algorithms.
Type of Medium: Electronic Resource
• 8
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Fredholm integral equations ; iterative methods ; Neumann series ; numerical methods ; transport theory
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract The present paper extends the synthetic method of transport theory to a large class of integral equations. Convergence and divergence properties of the algorithm are studied analytically, and numerical examples are presented which demonstrate the expected theoretical behavior. It is shown that, in some instances, the computational advantage over the familiar Neumann approach is substantial.
Type of Medium: Electronic Resource
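For context, the "familiar Neumann approach" the abstract compares against is plain successive substitution x_{m+1} = y + Kx_m for a Fredholm equation of the second kind, x(t) = y(t) + ∫₀¹ K(t,s)x(s) ds. The kernel K(t,s) = t·s and right-hand side y(t) = t below are assumptions chosen so the exact solution x(t) = 1.5t is known; midpoint quadrature discretizes the integral.

```python
def neumann_solve(n=200, iters=40):
    h = 1.0 / n
    nodes = [(i + 0.5) * h for i in range(n)]   # midpoint quadrature nodes
    y = nodes[:]                                # y(t) = t
    x = y[:]                                    # Neumann iteration starts at x_0 = y
    for _ in range(iters):
        # the kernel K(t,s) = t*s is separable, so K x reduces to one inner product
        inner = h * sum(s * xs for s, xs in zip(nodes, x))
        x = [yt + t * inner for t, yt in zip(nodes, y)]
    return nodes, x

nodes, x = neumann_solve()
```

The iteration contracts because the integral operator has norm 1/3 < 1 here; when the norm approaches or exceeds 1 the Neumann series stalls or diverges, which is the regime where an accelerated scheme such as the synthetic method pays off.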
• 9
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Stereology ; statistics ; packing problem ; Abel integral equations ; Volterra integral equations ; numerical methods
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract We discuss the application of integral equations techniques to two broad areas of particle statistics, namely, stereology and packing. Problems in stereology lead to the inversion of Abel-type integral equations; and we present a brief survey of existing methods, analytical and numerical, for doing this. Packing problems lead to Volterra equations which, in simple cases, can be solved exactly and, in other cases, need to be solved numerically. Methods for doing this are presented along with some numerical results.
Type of Medium: Electronic Resource
• 10
Electronic Resource
Springer
ISSN: 1573-2878
Keywords: Function minimization ; nonlinear constraints ; nonlinear approximation ; numerical methods
Source: Springer Online Journal Archives 1860-2000
Topics: Mathematics
Notes: Abstract An iterative technique is developed to solve the problem of minimizing a function f(y) subject to certain nonlinear constraints g(y) = 0. The variables are separated into the basic variables x and the independent variables u. Each iteration consists of a gradient phase and a restoration phase. The gradient phase involves a movement (on a surface that is linear in the basic variables and nonlinear in the independent variables) from a feasible point to a varied point in a direction based on the reduced gradient. The restoration phase involves a movement (in a hyperplane parallel to x-space) from the nonfeasible varied point to a new feasible point. The basic scheme is further modified to implement the method of conjugate gradients. The work required in the restoration phase is considerably reduced when compared with the existing methods.
Type of Medium: Electronic Resource
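The two-phase iteration described above can be sketched on a toy problem: minimize f(y) = y₁² + y₂² subject to g(y) = y₁ + y₂ − 1 = 0, taking y₁ as the basic variable x and y₂ as the independent variable u. The test problem, the fixed stepsize, and the trivial restoration (exact here because the constraint is linear) are illustrative assumptions, not the paper's scheme.

```python
def solve(u=0.0, step=0.1, iters=100):
    x = 1.0 - u                        # start feasible: g(x, u) = x + u - 1 = 0
    for _ in range(iters):
        # reduced gradient: df/du holding g = 0, using dx/du = -1 for this g
        red_grad = 2.0 * u - 2.0 * x
        # gradient phase: move (x, u) along the linearized constraint surface
        u = u - step * red_grad
        x = x + step * red_grad
        # restoration phase: adjust the basic variable only (a move parallel
        # to x-space) so that g = 0 holds again at the new point
        x = 1.0 - u
    return x, u

x, u = solve()
```

The iterates approach the constrained minimum (0.5, 0.5); with a nonlinear g the restoration phase would instead require a Newton-type correction in x, which is where the work savings claimed in the abstract matter.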