Using the NAG Toolbox for MATLAB

The NAG Toolbox makes the full functionality of the NAG Library available through MATLAB. An advantage of calling NAG via MATLAB is that many routine arguments become optional or unnecessary, thus making code easier to read and maintain.

All the NAG Library documentation - in printed form that's seventeen large volumes taking up a full yard of shelf space - has been converted to MATLAB help format, and so it is simple to access via MATLAB's usual documentation facilities. Included in the documentation for each NAG Library routine is example MATLAB code showing how to use the routine.
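
For example, once the Toolbox is installed, the document for an individual routine can be pulled up from the command line in the usual MATLAB way (routine s17ac is used here purely as an illustration):

help s17ac   % brief help text in the Command Window
doc s17ac    % the full routine document in the Help browser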

To show how easy it is to use the new Toolbox, we demonstrate how to call some popular NAG routines, and use MATLAB's plotting facilities to view the results.

Note: The code examples in this article have been extracted from demo scripts, and will not necessarily work properly if cut and pasted from this page into MATLAB. The full versions of the scripts, which were used to make the figures in this article, are available in this archive.

The S chapter - Bessel functions

We start with some of the simplest possible NAG calls. Among many other special functions, the S chapter contains the Bessel functions Y0(x), Y1(x), J0(x) and J1(x). Following the usual NAG naming convention, these are in routines s17ac, s17ad, s17ae and s17af respectively.

Here's the code, including some plotting commands:

 
% Evaluate the Bessel functions at a set of points in x
x = [0.125, 0.25 : 0.25 : 40];
for i = 1 : length(x)
  [y0(i), ifail] = s17ac(x(i));
  [y1(i), ifail] = s17ad(x(i));
  [j0(i), ifail] = s17ae(x(i));
  [j1(i), ifail] = s17af(x(i));
end

% Plot the results. Note that we omit the first two values of Y1(x)
% because they are large and would spoil the scaling of the graph.
hold on;
plot(x, y0, 'Color', 'red');
plot(x(3:length(x)), y1(3:length(x)), 'Color', 'green');
plot(x, j0, 'Color', 'blue');
plot(x, j1, 'Color', [1.0 1.0 0.0]);
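
A legend and an axis label make the plot easier to read; the two lines below are a small addition of our own rather than part of the demo script:

legend('Y0(x)', 'Y1(x)', 'J0(x)', 'J1(x)');
xlabel('x');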

The results are shown in Figure 1:

Figure 1: the Bessel functions

The MATLAB script for this demo is available as the file NAGToolboxDemos/Special_functions/s17a_demo.m, distributed in this archive.

The E02 Chapter - Fitting a surface with bicubic splines

A common task in many areas of science is to find a function which approximates a set of data points. The data may contain random perturbations, such as measurement errors, which it is desirable to smooth out.

e02dc computes a bicubic spline approximation to a set of data values, given on a rectangular grid in the x-y plane. A single parameter is specified to control the trade-off between closeness of fit and smoothness of fit.

A small value for this smoothing parameter, named s in e02dc, will give a closer fit to the data, but the exact meaning of “small” in this context may vary according to the particular data set. In theory, a value of s = 0 will produce an interpolating spline, though in practice that may cause the spline to oscillate around the original data points.

 
% Create an arbitrary data set consisting of a plane with three peaks
x = [0; 0.5; 1; 1.5; 2; 2.5; 3; 3.5; 4; 4.5; 5];
y = [0; 0.5; 1; 1.5; 2; 2.5; 3; 3.5; 4];
f = [1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1;
     1; 1; 1.1; 1.2; 1.1; 1; 1; 1; 1; 1; 1;
     1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1;
     1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1;
     1; 1; 1; 1; 1; 1.075; 1.15; 1.075; 1; 1; 1;
     1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1;
     1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1;
     1.1; 1.2; 1.1; 1; 1; 1; 1; 1; 1; 1; 1;
     1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1];

% The smoothing parameter. A smaller value of s gives a closer fit.
s = 0.0175593;

% Set the coordinates of the rectangular grid.
px = 0 : 0.1 : 5.0;
py = 0 : 0.1 : 4.0;

% Initialize some quantities required by e02dc
start = 'C';
nx = int32(0);
lamda = zeros(15,1);
mu = zeros(13,1);
ny = int32(0);
wrk = zeros(592, 1);
iwrk = zeros(51, 1, 'int32');

% Compute the bicubic spline approximant with e02dc
[nxOut, lamdaOut, nyOut, muOut, c, fp, wrkOut, iwrkOut, ifail] = ...
e02dc(start, x, y, f, s, nx, lamda, ny, mu, wrk, iwrk);

% Evaluate the spline on a rectangular grid using e02df
[ff, ifail] = e02df(px, py, lamdaOut(1:nxOut), muOut(1:nyOut), c);
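
To see the trade-off described above in action, e02dc can simply be called again with different values of s. The loop below is our own addition (the particular s values are arbitrary) and assumes the arrays set up in the listing above are still in the workspace; fp is the sum of squared residuals of the fit, and nxOut and nyOut are the numbers of knots in each direction, as described in the e02dc document.

for sTrial = [1.0, 0.1, 0.0175593, 0.001]
  [nxOut, lamdaOut, nyOut, muOut, c, fp, wrkOut, iwrkOut, ifail] = ...
      e02dc(start, x, y, f, sTrial, nx, lamda, ny, mu, wrk, iwrk);
  % A smaller s forces a closer fit (smaller fp) at the cost of more knots.
  fprintf('s = %g: fp = %g, knots = %d by %d\n', sTrial, fp, nxOut, nyOut);
end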

Figures 2 and 3 show the original data, and the data approximated by a smooth bicubic spline.

Figure 2: data to be fitted by bicubic smoothing spline

Figure 3: the bicubic spline evaluated on a rectangular grid

The MATLAB script for this demo is available as the file NAGToolboxDemos/Curve_and_surface_fitting/e02dc_demo.m, distributed in this archive.

The E04 Chapter - Minimization of a function

NAG routine e04wd is designed to find a minimum point of an arbitrary smooth function, which may be subject to linear and nonlinear constraints, using a sequential quadratic programming (SQP) method - see the e04wd document for more details of the method.

e04wd will be used to find a minimum of the well-known Rosenbrock function, which is a 2D function given by

  f(x,y) = 100*(y-x*x)^2 + (1-x)^2

Minimizing Rosenbrock's function is an unconstrained problem - x and y are free to take any values at the minimum point, which lies at (1,1), where f = 0.

Because of the “banana-shaped valley” contours characteristic of Rosenbrock's function, shown in the figures below, the function is often used as a test problem for minimization routines like e04wd. Simplistic algorithms, such as a basic “steepest descent” method, have a hard time following the shallow, curving valley floor in their quest for the minimum point, but the NAG routine has no trouble.

A feature of e04wd is that the first derivatives of the function to be minimized are not required to be known analytically. If they are not known, or are hard to determine, e04wd can estimate them using difference approximations. If the user is able to supply them, though, convergence of the algorithm is likely to be faster and more robust.
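
For Rosenbrock's function the first derivatives are easy to write down by hand:

  df/dx = -400*x*(y - x^2) - 2*(1 - x)
  df/dy = 200*(y - x^2)

and these are the expressions coded, in a slightly rearranged form, in objfun.m below.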

The function to be evaluated is supplied to e04wd through a MATLAB .m file - here we name the file objfun.m. This routine may also evaluate the first derivatives of the objective function if requested to do so. What if the derivatives are not available? Simply don't compute them; e04wd notices whether or not they have been supplied, and will estimate them itself if necessary.

Here's what objfun.m looks like:

 
function [mode, objf, grad, user] = ...
    objfun(mode, n, x, grad, nstate, user)
  objf = 100*(x(2)-x(1)^2)^2 + (1 - x(1))^2;
  if (mode == 1 || mode == 2)
    % Derivatives: grad(i) is the derivative with
    % respect to x(i), for i = 1, 2.
    grad(1) = 2*(1-200*x(2))*x(1) + 400*x(1)*x(1)*x(1) - 2;
    grad(2) = 200*(x(2) - x(1)*x(1));
  end
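
As a quick check, not part of the demo script, objfun can be called directly from the command line. At the known minimum (1,1) both the objective value and the gradient should be exactly zero:

[mode, objf, grad] = objfun(2, 2, [1; 1], zeros(2,1), 1, []);
% mode = 2 asks for both the objective value and the gradient;
% here objf is 0 and grad is [0; 0].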

And here's how to call e04wd:

 
% There are no linear or nonlinear constraints
a = [];
istate = zeros(2, 1, 'int32');
ccon = [];
cjac = [];
clamda = zeros(2,1);
hess = zeros(2,2);

% Arbitrary bounds on the variables
bl = [-10; -10];
bu = [10; 10];

% Initial guess at solution
x = [-2.75; 2];

% Initialize the minimization routine e04wd using e04wc
[iw, rw] = e04wc();

% Array user can contain anything we want. It lets us
% communicate with objfun.
user = [0,0,0,0,0];

% Solve the problem using NAG routine e04wd.
[majits, istateOut, cconOut, cjacOut, clamdaOut, objf, grad, ...
hessOut, xOut, iwOut, rwOut, user] = ...
e04wd(a, bl, bu, 'confun', 'objfun', istate, ...
ccon, cjac, clamda, hess, x, iw, rw, 'user', user);

Note that as well as the function in objfun.m, e04wd also requires a function confun.m to evaluate any nonlinear constraint functions. In this case there are no nonlinear constraints, so a dummy version of confun.m can be supplied.

Also note that for clarity, all the MATLAB commands used for plotting the contours and results are omitted.
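
For anyone who wants to reproduce the background contours by hand, a minimal sketch of our own (not the code used to make the figures) is:

[X, Y] = meshgrid(-3 : 0.05 : 2, -1 : 0.05 : 3);
Z = 100*(Y - X.^2).^2 + (1 - X).^2;
% Contour log(1+Z); the raw values grow so quickly away from the
% valley that the banana shape would otherwise be invisible.
contour(X, Y, log(1 + Z), 30);
hold on;
plot(-2.75, 2, 'bo');   % the starting point used in the demo
plot(1, 1, 'go');       % the minimum at (1,1)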

The figure below shows the path to the minimum of Rosenbrock's function when analytical derivatives are supplied. Every time the objective function is evaluated, the evaluation point is plotted. The starting point is shown by a blue circle, and the target minimum, at the point (1,1), by a green circle. Notice how the traced path backtracks and corrects itself on its way to the minimum.

Figure 4: Path to the minimum of Rosenbrock's function when derivatives are known

To illustrate how e04wd still works even if derivatives are not supplied, we can replace the function in objfun.m by this code:

 
function [mode, objf, grad, user] = ...
    objfun(mode, n, x, grad, nstate, user)
  objf = 100*(x(2)-x(1)^2)^2 + (1 - x(1))^2;
  % We don't compute derivative information
end

Figure 5: Path to the minimum of Rosenbrock's function when derivatives are not supplied

This time the lack of derivative information makes the job harder for e04wd, which must estimate the derivatives using extra evaluations of the objective function. In total, 231 evaluations are now required, compared with the 54 needed when derivatives were available. In addition, the minimum value obtained is not quite as accurate, though it is still a good approximation to the exact minimum value of 0.0.

The moral of the story? Supply derivatives if you can, and let the NAG routine estimate them for you if you cannot!

The MATLAB script for this demo is available as the file NAGToolboxDemos/Minimization/e04wd_demo.m, distributed in this archive.