Solving Numerical & Algebraic Problems with Maxima Code
Numerical Methods for Solving Equations
1.1 Solving an Algebraic System
This code defines a function f(V) and then solves the system f([x,y,z]) = [x,y,z] to find its fixed points using the algsys command.
faux(x,y,z) := [(x^2)/2 + 1/3, x+y+z/4, x^2-y^2+(z^2)/4];
f(V) := faux(V[1], V[2], V[3]);
ecus : f([x,y,z]) - [x,y,z];
algsys(ecus, [x,y,z]), numer;
1.2 Fixed-Point Iteration Method
This block implements the fixed-point iteration method. It starts with an initial guess (semilla) and repeatedly applies the function f until the norm of the difference between successive iterates is smaller than the tolerance 10^(-14).
nor(V) := sqrt(V.V);
fpprec:100;
block(
semilla:[0,0,0],
while nor(f(semilla)-semilla) > 10^(-14) do
semilla:bfloat(f(semilla)),
semilla
);
1.3 Newton's Method for Systems
Here, Newton's method is used to find the roots of the function g(V), which is f(V) - V from the previous section, so its roots are the same fixed points. The code defines the Jacobian matrix jaco(V) and the Newton step V -> V - jacoV(V)^(-1) . g(V). The loop continues until the norm of g(V) is below the tolerance.
gaux(x1,x2,x3) := [(x1^2)/2+1/3-x1, x1+x3/4, x1^2-x2^2+(x3^2)/4-x3];
g(V) := gaux(V[1], V[2], V[3]);
define(jaco(x,y,z), jacobian(g([x,y,z]), [x,y,z]));
jacoV(V) := jaco(V[1], V[2], V[3]);
newton(V) := transpose(V - invert(jacoV(V)) . g(V))[1];
block(
V:[0,-1,-1],
while nor(g(V)) > 10^(-14) do
V:bfloat(newton(V)),
V
);
%, numer;
Polynomial Interpolation with Lagrange
3.1 Interpolating a Known Polynomial
This section defines a generic lagrange function for polynomial interpolation. It is then used to find the interpolating polynomial for f(x) = x^5 + x^4 + x^3 + x^2 + x + 1 using six nodes. As expected, the result is the original polynomial.
lagrange(func, nodos) := block(
kill(L),
n:length(nodos),
/* L[i] is the i-th Lagrange basis polynomial for the given nodes */
L[i] := product(if i=j then 1 else (x-nodos[j])/(nodos[i]-nodos[j]), j, 1, n),
/* the interpolant: sum of func(node_i) times the i-th basis polynomial */
sum(func(nodos[i])*L[i], i, 1, n)
);
f(x) := x^5+x^4+x^3+x^2+x+1;
PL5 : lagrange(f, [0,1,2,3,4,5]), expand;
3.2 Interpolating a Transcendental Function
The lagrange function is used to approximate the function g(x) = x + exp(cos(x)) over the interval [2, 10]. The resulting polynomial PLg is then plotted alongside the original function g(x) for comparison.
g(x) := x + exp(cos(x));
PLg : lagrange(g, [2,3,4,5,6,7,8,9,10]), numer;
PLg : expand(PLg);
wxplot2d([g, PLg], [x,2,10]);
Matrix Functions via Eigenvalue Interpolation
4.1 Calculating the Cube Root of a Matrix
This example demonstrates how to compute the cube root of a matrix M. First, its eigenvalues are found. Then, Lagrange interpolation is used to find a polynomial pol(x) that matches the cube root function x^(1/3) at those eigenvalues. Finally, this polynomial is evaluated at the matrix M to obtain the matrix cube root. The result is verified by cubing it and subtracting the original matrix.
/* Assume matrix M is defined elsewhere */
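/* Minimal sketch of a suitable M, purely for illustration (the original matrix
   is not given here): any diagonalizable 6x6 matrix with six distinct
   eigenvalues fits the degree-5 interpolation used below. This hypothetical
   choice has eigenvalues 1, 8, 27, 64, 125, 216, so the computation stays exact. */
M : genmatrix(lambda([i,j], if i=j then i^3 elseif j=i+1 then 1 else 0), 6, 6);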
propios : eigenvalues(M)[1];
cubica(x) := x^(1/3);
pol : lagrange(cubica, propios), expand;
raizM : sum(coeff(pol, x, i) * M^^i, i, 0, 5);
raizM^^3 - M, ratsimp;
7.1 Calculating Trigonometric Functions of a Matrix
Similar to the previous example, this code calculates sin(B) for a given 8x8 matrix B. It interpolates the sin(x) function at the matrix's eigenvalues. The resulting polynomial is then applied to the matrix B. The code also verifies the fundamental trigonometric identity sin(B)^2 + cos(B)^2 = I (identity matrix), assuming cos(B) is computed similarly.
B : genmatrix(lambda([i,j], if i=j then i elseif j>i then 1 else 0), 8, 8);
propios : eigenvalues(B)[1];
f1(x) := sin(x);
f2(x) := cos(x);
pinterpolsin : lagrange(f1, propios), expand;
sinB : sum(coeff(pinterpolsin, x, i) * B^^i, i, 0, 7), expand;
/* Assume cosB is calculated similarly to sinB */
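/* One way to fill that in, mirroring the sinB computation above
   (pinterpolcos is just an illustrative name): */
pinterpolcos : lagrange(f2, propios), expand;
cosB : sum(coeff(pinterpolcos, x, i) * B^^i, i, 0, 7), expand;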
trigsimp(sinB . sinB + cosB . cosB);
Optimization and Dynamic Programming
8.1 A Recursive Optimization Problem
This code defines a recursive function G[k,x], which represents a value function in a dynamic programming problem over 100 steps. It also defines a function mini that minimizes a function of one variable by ternary search, which assumes the function is unimodal on the search interval. The goal is to find the value of x that minimizes the initial value G[0,x].
pago(k,x) := k*x;
G[k,x] := if k = 100 then -x + 155
else 99/100*(G[k+1,x]) + 1/100*(max(G[k+1,x], pago(k+1,x)));
mini(fu, a, b, tol) := block([alfa, beta],
/* ternary search: compare fu at the two interior points and keep the
   sub-interval that must contain the minimum */
alfa:a+(b-a)/3,
beta:b-(b-a)/3,
if abs(a-b) < tol then
(a+b)/2
elseif fu(alfa) < fu(beta) then
mini(fu, a, beta, tol)
else
mini(fu, alfa, b, tol)
);
G[0,7], numer;
func(x) := G[0,x];
func(7), numer;
m1 : mini(func, -1000, 1000, 10^-4), numer;
ms : func(m1);