coco/toolbox/todo.txt [r1142]

before book release
-------------------
from Harry:
# Bug in covering_create, line 164: switch order of factors in
data.s*data.h to allow for generalization to arbitrary projection
conditions for higher-dimensional manifolds.
> not changed, the definition of the projection is through pr_cnd.TS
# Bug in CurveSegment, line 128: change to s = chart.TS'*base_chart.TS*s0
> no, is correct as is
# I am planning to write one chapter about adaptation and will use the
spectral approach to periodic orbit continuation as an example toolbox.
I believe we agreed to insert an additional state between flush and
predict, called adapt, that calls all toolboxes' adapt slot functions.
This function would need to recreate the tangent vector after all zero
problems and variable values have been updated. There would also be an
atlas algorithm function, with the default set to no adaptation, that
could be changed to adapt, for example, every few charts.
> added point to list below
# Do we need to call coco_set before coco?
> Yes, do not assign to coco directly. You may assign a whole structure
with coco_set though (see the sketch after this list).
# Is beta always increasing in homotopy (cf. lorentzdemo, step 2)?
> Should be for the full quadratic condition, but may not be for the
linearised condition due to non-monotone convergence when eigenvalues
are not bounded away from -1. ==> use damped FP iteration in update of M*.
>> We may not need the homotopy; damped Newton will do it in one correction
step for the full quadratic equation. Try with updates during Newton for the
linear condition.
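A minimal sketch of the coco_set usage referred to above; the 'corr'
property path and values are only illustrative:
  % set a single property, or assign a whole structure in one call;
  % never write to the options fields directly
  opts = coco_set(opts, 'corr', 'TOL', 1e-8);
  corr.TOL  = 1e-8;
  corr.ItMX = 10;
  opts = coco_set(opts, 'corr', corr);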
bugs:
frank@ceylon:~/.../toolbox/covering$ svn diff -x -w -r HEAD
Index: covering_create.m
===================================================================
--- covering_create.m (revision 1132)
+++ covering_create.m (working copy)
@@ -169,10 +169,11 @@
f = data.TS*(u-data.u0) - data.s*data.h;
case 2 % fix embedded test function along curve segment
- f = u(data.fixpar_idx) - data.fixpar_val;
+ f = zeros(data.dim,1);
+ f(1) = u(data.fixpar_idx) - data.fixpar_val;
TSjj = data.TS(data.jj,:);
for j=data.j
- f = [f; (data.TS(j,:)-data.ss(j)*TSjj)*(u-data.u0)]; %#ok<AGROW>
+ f(j) = (data.TS(j,:)-data.ss(j)*TSjj)*(u-data.u0);
end
case 3 % fix set of dim active parameters
@@ -223,7 +224,7 @@
data.h = pr_cond.h;
[data.ss data.jj] = max(abs(data.s));
- data.ss = sign(data.s(data.jj))*(1/data.ss)*data.s;
+ data.ss = (1/data.ss)*data.s(data.jj);
data.j = setdiff(1:data.dim, data.jj);
data.mode = 1;
- orth-function in CurveSegmentBase (preserves handedness)
- when parameters are overspecified but not present, just exclude from
output instead of throwing an error (empty value in bd)
- update cover_0d [atlas_0d], cover_1d_ct [atlas_1d] and
cover_1d_lsq [atlas_1d_lsq]
- curve segment should raise signal FSM_update(update) whenever pr_cond
is changed
- check_first_chart, ev_locate: pass chart and tangent to evhan, allow
only modification of chart data
? update slot of xfunc: correct fix_pars condition!
? copy tangent information in add_parameters
# introduce initialisation sequence that allows correct use of tangents
in check_first_chart as well as correct evaluation of test functions
at first chart
# rename set_MX to set_status
# check bd_col, arrays are not merged even though they seem to fit,
like in bd_col(bd, {'x' 'p'}) [fix committed to coco]
# during event handling (singular events): after chart_at compute
chart.p only if not present, because p might be interpolated as well.
# rename cseg.MXFlag to cseg.Status
# function bd_idx that returns list of indices of special points
# remove cseg in flush, pass cseg to predict only if present in opts
# modify FSM to allow a loop over several curve segments (if atlas holds
several curve segments and wants to have event handling handle every
segment separately)
> No, is not really necessary and would complicate FSM too much.
# add_parameters should take fid, not prefix as argument
# split CurveSegment in base class for event handling plus derived classes
for interpolation
# fix state_init: chart should be a copy of curr_chart after construction
of curve segment, correct passing of 'EP' and ep_flag
# chart_at should return h instead of using hscale
# update ignore_at somewhere (in flush)
# check add_BP and add_SP: multiple events with no event handler,
evidx must contain complete evgroup, therefore, should it not be
chart.pt_type = ev.point_type{events.evidx(1)};
# check the computation of h_scale -> move call to cut_cseg to add_BP.
Do the functions [BP|MX|SP]_locate become identical and can be merged?
# function for computation of s and t related to a projection condition
in CurveSegment (takes base_chart.x and pr_cnd.TS as arguments and
returns pr_cnd.s and base_chart.t; pr_cnd.x must lie on [projection
of u onto pr_cnd.TS] + h*pr_cnd.s; pr_cnd.x and pr_cnd.h are then
chosen by predictor); compute_tangent must use pr_cnd for computing
chart.t as well
> no, use tangent function to compute t, CurveSegment will contain a
function for checking the correctness of a projection condition
# check CurveSegment:119:131; compare with printout of pr. cond.
# add_SP returns idx (used by add_BP and passed to cut_seg)
# split cseg into atlas and cseg
# make pr_cond a property of cseg, allow construction of 0d- as well
as 1d- curve segments
# change check_first_point to provide function that returns a vector of
flags instead of removing directions explicitly
# check uniqueness of fid in add_func
# prefix may now be a cell array of two strings for somewhat more
flexibility in generating function identifiers
> No, this is pretty useless
# allow extracting info from efunc using get_func_data; this will be
necessary in add_parameters to copy tangent information
# clean up syntax of get_func_data, only return elements explicitly
requested by a string argument; check 'cdata' option!
# remove all additional return arguments from add_func, access to
function data should be explicit through get_func_data
# allow
init->correct->add->check_first->flush->predict->...
as well as
init->add->check_first->flush->correct->predict->...
# change homotopy interval to [-1,1] in init_var_sol, make interval, beta0
and other options settable; use hom_opts structure for more flexibility,
not just hom_cont
# include determinant and condition computation in lsol
# rename all signal/slot related functions: coco_cb_data.m, coco_cb_exist.m
# handle boundary points with tangent direction tangent to boundary
# use pr_cnd correctly during event location
# cseg in output from member functions?
# Remove update call from init_atlas (look at Harry's code!)
# Require predict to assign cseg.base_chart and cseg.curr_chart?
# Default implementations instead of abstract classes in CurveSegment
for non-minimal functions.
# Call to Newton (combined evaluation) FDF!
# use fdf in newton
# compute initial tangent correctly
# correct inheritance of boundary points (ptlist{end}) and special
points (ptlist{1})
# check that xfunc_update makes sense (using TS of chart etc.), not
all options seem possible with the current prediction setting; handling
of s is not correct, maybe predict should return a structure with data
for xfunc
# initialise options of opts.cont correctly in cseg constructors
# check empty domain with minimal code; I removed this code from
covering_create:54
  % check for empty domain (bug: should this be in covering class?)
  if pint(2)<=pint(1) || abs(pint(2)-pint(1))<=opts.corr.TOL
    cont.ItMX = 0;
  end
FSM:
# during event handling evaluate only test functions that are needed
# in subdivision: choose x such that event does actually occur inside the
computational domain
# change call to predictor to
[opts curr_chart] = opts.cseg.predict(opts);
# change update signal to
opts = coco_emit(opts, 'FSM.update', 'update', curr_chart, base_chart);
# clean inheritance + initialisation strategy of chart data using curr_chart
and the signal FSM.init_chart_data; handle ignore_at etc. with a
default strategy in FSM + allow modification from within atlas codes;
this should, for example, allow storing the value of the determinant
of the extended system and the parameter component of the tangent vector
as test functions for a branch- and a geometric fold point
# pass base_chart to FSM.update signal
# correct event_init (and other states) to work with ptlists with more than
two points, for example, coming from cover_1d_lsq, add chart.p for all
charts missing p (even if event handling is off, because this is part
of the output)
# reduce number of evaluations of test functions to necessary minimum,
compute chart.p only if not present and use chart.e as cache
# pass interpolated continuation tangent to monitor functions during event
handling (not to embedded monitor functions!)
# change locating of group events, locate "leading event" only (??) and use a
Gauss-Newton method that can deal with singular matrices (??); other method:
keep current strategy, but apply TOL to interpolation accuracy => test with
cont.TOL possible!
efunc:
- split coco_add_func into
* coco_add_func
* coco_add_func_after(deplist, @ctor, ...), ctor calls add_func
* coco_add_pending([satisfied = 'efunc.close'])
# add support for combined evaluation of F and DFDU (see the sketch after
this list)
# introduce new argument cdata (chart data) in call to any efunc-function;
in efunc_F and efunc_DFDX allow different ways of calling a function
((data) or (data, chart_data)), [no: allow empty function definition (just add
toolbox and chart data)]
# remove the xidx-return argument and introduce new function set_func_data
instead; introduce "efunc.close" signal; check with example test functions
and the projection condition if two signals might be necessary (hope not!)
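A minimal sketch of a combined F/DFDU evaluation as proposed above; the
toy problem and the calling convention are only illustrative:
  % residual and Jacobian of a toy zero problem computed in one call,
  % so common subexpressions are evaluated only once
  function [data F J] = circ_fdf(opts, data, x)
  F = [x(1)^2 + x(2)^2 - 1; x(1) - x(2)];
  J = [2*x(1) 2*x(2); 1 -1];
  end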
covering:
# rename covering_cover to CurveSegment and all other classes to Atlas*
# introduce property TOL
# rename signal "covering_update" to "FSM_update" (this is an FSM signal)
# linear solver must be constructed outside non-linear solver, which
should get the linear solver as an argument.
# try to find a way for a non-linear solver to store data in a chart
(for example: Broyden's method should be able to store JInv); one
possibility is to have all parts of the correction operate on the
chart, so it gets built up step by step; this way, the linear
solver can store data as well (for example, the determinant):
introduce curr_chart in CurveSegment (see the first sketch after this list)
# point_at function should return [x p t], where p and t are optional, to
allow interpolation of parameters and tangent space if available/useful
(for example, in cover_1d_lsq); see the second sketch after this list
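First sketch: a hypothetical Broyden update that caches the approximate
inverse Jacobian in the chart, as suggested above; all names are
illustrative:
  % 'good Broyden' rank-one update of the cached inverse Jacobian,
  % given the last correction step dx and the residual change df
  function chart = broyden_update(chart, dx, df)
  JInv = chart.JInv;
  u    = JInv*df;
  chart.JInv = JInv + ((dx - u)*(dx'*JInv))/(dx'*u);
  end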
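Second sketch: a possible point_at with optional outputs, here with plain
linear interpolation between the end charts of the segment (illustrative
only):
  % interpolate a point on the curve segment at parameter s in [0,1];
  % p and t are computed only when requested by the caller
  function [x p t] = point_at(cseg, s)
  c0 = cseg.ptlist{1};
  c1 = cseg.ptlist{end};
  x  = (1-s)*c0.x + s*c1.x;
  if nargout>=2
    p = (1-s)*c0.p + s*c1.p;  % interpolated monitor values
  end
  if nargout>=3
    t = (c1.x - c0.x)/norm(c1.x - c0.x);  % secant as tangent estimate
  end
  end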
atlas_1d_min:
- add demo with events exactly on mesh points (straight line, h=1)
# new cover_1d_min_ev with BP and FP (tangent test)
coco:
# restrict direct access of any opts.* to core
# in coco_add_parameters: allow parameters to be added to different lists
(active, inactive, internal)
# use signals within coco_set to update class variables during run-time
# introduce "all" properties that every toolbox can use as global default
# coco_set and coco_get: move all settings to opts.settings after changing
all accesses to coco_get
# coco_set and coco_get function: everything goes to opts.settings; merge
only opts.settings and copy everything else; do not use coco_set/coco_get
for anything other than setting/reading options in settings
-----------------------------------------------------------------------------
Post-merge bugs:
tutorial text:
- use event handler to distinguish between Hopf points and neutral saddles;
this is a non-trivial but instructive example for using event handlers
from Harry:
- In lorentzdemo in Floquet folder, note initial residual in step 9.
bugs:
- test scaling routine in lsol_splu with large problems
- remesh function as option to add_func; during remeshing call these functions
in the sequence toolboxes were added to efunc; the default function just moves
the variables by translating xidx and fidx; remeshing occurs either in
predict or add_chart in atlas; provide an efunc-interface for saving and
restoring toolbox data (mesh info) by an atlas (simply save and restore the
efunc structure?); see the sketch after this list
- update all corrector classes
- clean up interface to bddat; looks like bddat should become property
of AtlasBase and the functions should become member functions (including
print* and save* functions)
- acp_idx as part of argument when passing tangent, or function
get_acp_idx (for testing uses)
- restrict write access to opts only to functions that really should have it;
for example, check all members of CurveSegment and AtlasBase
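A minimal sketch of the default remesh function mentioned above; xmap and
fmap are hypothetical translation tables supplied by the atlas:
  % default remesh slot: leave the function unchanged, only translate
  % its variable and equation indices to their new positions
  function data = remesh_default(data, xmap, fmap)
  data.xidx = xmap(data.xidx);
  data.fidx = fmap(data.fidx);
  end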
FSM:
- (??) pass curr_chart to all bd-functions
- develop correct strategy for branch-switching
efunc:
- when adding internal parameters allow positional arguments
'before', 'after', '1', 'end'; only top-level toolboxes should add
internal parameters (does this really make sense ???)
- allow rectangular systems (option in add_func)
- add support for structured return value of efunc_DFDX
cover_1d_ct:
- flip order in ItMX: [left_ItMX right_ItMX]
- use sum of angles between tangents and secant for step size control;
this will lead to acceptable results even at inflection points
- detect LP and BP in covering; branch-switch parser at BPs
- better method for guess of initial tangent; see tests/nullspace
- demo that tests all branches of cover_1d_ct
coco:
- check semantics of coco_add_functionals; should match both matrix-vector
product and array dot product
-----------------------------------------------------------------------------
cover_1d_lsq:
- after add_point call function for performing measurements, which should be
interpolated and be available to monitor functions
bugs:
- check if rlab is a string in coco_read_solution (accept string?)
- masking of components of tangent is necessary for vareqn=track;
make this an option of add_func; use this for step size control;
change add_point and MX_check back to using midpoint (?)
- boundary condition in variational problem is wrong! -> use condition
with M' = M_old'; more research on stabilisation of iteration necessary
- allow ode45 syntax for odes throughout all toolboxes; this is necessary
for mh_imfcont for passing function data to the embedding ode; get rid
of persistent variables in imf_create_ode (see the sketch after this list)
- expand kappa and enable heuristic adaptation via update event
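A sketch of the ode45-style convention referred to above: function data is
passed to the right-hand side through an anonymous function, so no
persistent variables are needed (the ode itself is just an example):
  data.p = 2.5;                          % example parameter
  rhs    = @(t,x) [x(2); -data.p*x(1)];  % rhs closes over data
  [t x]  = ode45(rhs, [0 10], [1; 0]);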
cover1d/FSM:
- user defined error handler in state_correct?
- handle closed branches (???)
coverkd:
- better remeshing algorithm; only consider immediate neighbourhood of chart
to resize
- use anisotropic distance function to avoid detecting charts as neighbours
when they really are on different almost parallel sheets (see example sphere)
(use the angle between TS and distance vector as parameter)
- allow specification of components for bounding box algorithm to reduce
dimension of bounding box space
- make connection between atlas and bd to allow plotting using both data
structures (save user data in atlas and bd)
- paging of charts
- print EP if no further boundary points available
fdm:
- construct more specialised differentiation code for fixed sizes of argument
arrays that does not create index arrays all the time; this construction
consumes the main computation time (in ALCONT)
coco:
- create utils subtoolbox, move all interface functions to utils
collocation:
- explicit Jacobian is not used (demo pwlin)
- allow initial solution like in AUTO: user defines T and provides rescaled
solution over [0,1]; this allows T=0 segments for initialisation, which is
not supported right now
- allow individual parameters per segment and provide default conditions
for making all sets equal; this feature will reduce bandwidth and should
always be used; dimension of parameters should be equal per segment for
more efficient handling; provide index set that properly expands
parameters to avoid using repmat or kron all the time
- check inconsistency in kappa; implement integration weights
- include rescaling factors, parameters and possibly time (??) as
variables
- vectorise second order derivatives
- write functions for remeshing and further manipulations so that a full
set of functions is available for other developers.
- adaptation, bandwidth reduction and computation of Floquet multipliers
in a forthcoming publication?
coll_linsolve:
- develop and use bandwidth reduction
- bandwidth reduction on level of equations (add integral constraints
that will generally be split up; try to use periodic matrices)
- use block-decomposition with bordering as a preconditioner and post-iterate
the solution; this should allow for a dramatic speed-up; functions should
be allowed to create a (square) block for which a block-decomposition
will be employed; blocks should be named and collected under a prefix
===========================================================================
general:
- use mymatrix(:) or mymatrix(:)' instead of reshape to create a column
or row vector from arbitrary data (see the sketch after this list)
- introduce a uniform naming convention for variables, for example,
for filenames such as FName, mfname, etc.
- associate sol with func and xsol with xfunc; better: add pointers to
dictionary entry of solution structures
- error tests in functions, in particular, check in and out arguments
- create correct data dependency tree to help clean up dependencies
and code
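A small illustration of the colon idiom recommended above (reportedly
about twice as fast as reshape; see the e-mail notes below):
  A = magic(3);
  v = A(:);     % column vector, same result as reshape(A, [], 1)
  w = A(:)';    % row vector, same result as reshape(A, 1, [])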
efunc: [coco_add_func]
- allow inclusion of granularity information to be used for bandwidth
reduction on the equation level (for linear functionals and integral
conditions (non-linear functionals))
- allow collection of functions in square blocks; use this information for
block-elimination + bordering algorithm as (constant) preconditioner
(cascaded Newton method)
mpbvp:
- allow parameters as variables (tori, Arnol'd tongues), introduce
appropriate index, which may be empty, and substitute and expand
list of parameters correctly
- compute interpolation error (on standard element)
- hybrid bvps: treat last boundary condition differently so that general
two-point hybrid bvps can be treated (last bc may depend on whole x,
not just boundary points like, for example, the periodicity condition),
specify as cell-array: {'hybrid', 'periodic'}; generalise periodic to
multi-segments but use only the very first and last boundary point;
initialisation in a loop over boundary conditions! compute bc in a loop
over boundary conditions
- compute proper L2-norm transforming the whole solution to [0 1],
such that the norm is independent of the number of segments
- allow parameters as additional variables (when more boundary
conditions than necessary are provided)? (OR: as user conditions)
- support all variants of vectorised/non-vectorised RHS and provided
derivatives
- include remeshing strategy:
NTST = [ [INIT MIN MAX] ; ... ]
- pmap and dmap sparse (?)
pocont:
- include amplitude as internal parameter in 'periodic'
coco:
- handle events of test functions that might have two zeros within
a single step (to avoid having to use an unreasonably small max.
step size); this requires holding two curve segments instead of one
and using a moving bracket (bracketing algorithm) -> add support for
bracketing to cover_1d and the fsm (flush_all on exit)
- indicate for each user function if bandwidth reduction is required;
for this, allow attaching properties to user functions via argument
pairs of type ('name', value) in coco_add_func (see the sketch after this list)
- allow excluding vars/pars that occur multiple times through equations
like a=b; b=c from adaptation and so on; it is a bit confusing that certain
variables get higher weights when they are duplicated for convenience
! Is this problematic: mesh point on event within TOL??
- weight vector for norms and scalar products,
for example, NWTN: norm(w.*d)<TOL
- is it possible to write a check-settings function to validate
arguments for coco_set?
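A sketch of the proposed ('name', value) property pairs; the property name
and the exact argument list of coco_add_func are only illustrative:
  % attach a per-function property when the function is added, so the
  % linear solver can decide whether to apply bandwidth reduction
  data.x0 = 1;
  bc_F    = @(opts, data, x) x(1) - data.x0;
  bc_DFDX = @(opts, data, x) [1 0];
  opts = coco_add_func(opts, 'bc', bc_F, bc_DFDX, data, ...
    'BandwidthReduction', true);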
cover_1d_cc:
- use Hermite extrapolation: poly_xcoeffs+poly_eval
- use projection condition with h=0 and predicted tangent
- when using a cubic predictor, reduce step size until all criteria
are met -> should produce no (or extremely few) rejected steps
===========================================================================
from e-mails:
==================
> Something else to consider: in coco_print_data, the save routines are only
> used every so often. This is fine until you want to use a user function to
> add extra information to the bifurcation diagram... Because the MPBVP save
> routines add extra information to the opts.sol structure (i.e. the
> solution in a nice shape) the user function can only get access to this
> extra data on the (few) occasions that the data is saved. This is quite
> annoying if you want to define an extra solution norm, since I don't want
> to have to cut-and-paste the mpbvp_save_full code into my user function.
The data you would like to use is non-standard data and should not be accessed
by user functions. In fact, in the current implementation I eliminated the
possibility of accessing this data. A user function should not depend on it.
The correct solution is as follows: if some particular bit of code is
useful for users, we should write a public utility function that makes its
functionality available to everyone.
In fact, we really should start to collect ideas for such functions in a
spirit similar to the functionality of rauto. Such functions would be
useful for coco as well. Maybe a default plot routine for bifurcation
diagrams? Or routines for extracting data about special points? At the
moment we do all this by getting our hands dirty. If we provide functions for
standard tasks, we also make user code independent of future changes to the
data format. It seems worthwhile to spend some time on that.
==================
> I don't have a problem with the initial tangent computation and
> understand it completely. I am just wondering why the primary
> continuation parameter is the first element of opts.cont.pidx. I don't
> see why we need to do this. Instead, to be consistent with everything
> else we have done, I think there should be an additional index vector
> which specifies which elements of opts.cont.pidx are primary
> continuation parameters.
>
> For example, when I do continuation with the demo file as written, it
> introduces two eta variables, the first of which is set to PAR(2) and
> the second to H. On the other hand, if I exchange H for PAR(2) and do
> continuation in H instead of PAR(2), it again introduces two eta
> variables, the first of which is set to H and the second to PAR(2).
> This doesn't make sense to me. I think the eta variables should be H
> and PAR(2) in that order in both cases, but that there should be an
> additional index vector that tells you which of these is the primary
> continuation parameter.
>
> This is obviously not a big deal and certainly doesn't change the
> functionality. Either way is compatible with the write-up as I do not
> identify a primary continuation parameter. I just think it is cleaner.
I don't think that the current implementation should be changed. The active
continuation parameters (all parameters that are added to the continuation)
are, by default, added in the order they are defined with add_func. The
reordering that you observe is in fact not a reordering, but an exchange
of parameters. I decided to do this exchange explicitly, because it might
affect parameters in different sets of parameter types (inactive and active,
for example). To me, it seems most natural to perform this exchange
exactly as defined by the user with 'xchg_pars' and with the argument PARS
passed to cover1d. This may not be very obvious for short parameter lists,
but becomes clear if we have long lists of parameters.
There is no disadvantage in doing this exchange explicitly and I also don't
agree that it is contrary to the overall philosophy. In the code this
information is used in two functions only: 'state_init' and
'state_locate_cont.' At these points it is most natural to have the
explicit reordering at hand, instead of using something like
'pidx(pperm(1)),' because we actually never need the inverse permutation
of the one stored in 'pperm.' Note that 'pperm' would not only contain
the primary continuation parameters, but all parameters, because we might
need to fix any set of parameters for locating special points.
==================
> As a little help to me, I knocked up a dependency graph for COCO with
> Matlab's depfun. You can find it at
> <http://seis.bris.ac.uk/~db9052/helpful.tgz>; it includes all the help
> strings as well. Try looking at the coco.html file. The code to generate
> it is included. (Add coco to the path then do "data = gendepgraph([],2);
> writedepmindmap('~/coco.mm',data,'COCO',1);" to generate a mindmap which
> can then be fed into Freemind <http://freemind.sourceforge.net/>).
I didn't find it too useful. The main problem is that it does not capture
the heavy use of virtual functions as in our finite-state machines. Because
of this deficiency, I would prefer not to add it to the official
documentation. We could include information on how to generate it, but
I wouldn't go any further.
==================
> A number of comments regarding set_parival:
>
> Line 45: what if no such argument is given?
In this case, argument PARS must be empty and set_parival exits in line 38.
> Line 67: what if vals is empty?
Fixed.
==================
> Just curious: when you want to reshape a matrix to be a column or row
> vector, is there any particular reason you use reshape? You can instead
> use mymatrix(:) or mymatrix(:)' which is twice as fast.
I didn't expect this to be faster. I thought that reshape just assigns some
integers, while your code ought to expand and assign the whole array. I
will check this out and use it in the future.
==================
> Just been doing some more investigations of this ill-conditioned
> Jacobian... I have some old code that computes the same thing but doesn't
> have this problem - in fact the Jacobian it generates has a condition
> number of ~10^5. In contrast, the code based on MP-BVP for the same
> conditions generates a Jacobian with condition number ~10^12. The only
> difference between the two is that in my original code I implicitly
> enforce continuity between collocation segments whereas in the MP-BVP code
> it is explicitly enforced. Just something to think about... If I get any
> time (which I probably won't as I'm trying to generate results for
> ECCOMAS very rapidly!) I'll look into changing MP-BVP to enforce
> implicitly and see what happens.
I put it on the todo list. I will check this as well, but am not really
convinced yet. With the explicit continuity condition we only expand the
system's matrix with unit matrices. This expanded matrix is reducible and
it is possible to reorder it in a way that shows that it should have the
same condition number as the reduced matrix. This is a bit mysterious and
could be related to some problems with UMFPACK. This would affect 'condest'
and other functions. Of course, if Matlab cannot handle the larger matrix
properly, we should reduce the system.
==================
> In case you are interested, I've implemented a wrapper for the Matlab
> fsolve routines so that COCO can use fsolve as the corrector (since it is
> more robust than the Newton with line-search method). It doesn't quite
> work like the coco_nwtn_step function since it does the complete
> correction in a single step (as far as COCO is concerned). Does this
> matter at all? Also, it does not implement the step start callback
> function since fsolve does not have provision for this (but it does allow
> the step end callback - but opts cannot be changed in this).
>
> Shall I import it into the SVN? If so, would you rather it went in the
> coco directory or in a directory of its own?
This wrapper should be a toolbox of its own, consider it a first
contributed toolbox! Essentially, this was the idea of coco: to make it
possible to add such extensions without having to modify the core. Of
course, we can ship it together with coco (if we are happy with its
quality :). Just prepare some documentation and put it on your publication
list.
==================
> Also, just wondering: the opts.nwtn.end_callback is only called if the
> Newton iteration is successful - perhaps this should be called even if it
> is not successful? It would then allow parameters to be tweaked and the
> correction re-tried.
No, this is correct. In the case that It >= ItMX an exception is thrown.
If this exception is caught at the correct place, this implicitly carries
the information that the iteration needs to be re-initiated, including
the tweaking of parameters. If this exception is not caught, then the
computation is terminated anyway. The 'catch' part of the error handler
is the correct place to do any tweaking. Essentially, the 'begin_callback'
ought to do precisely this!
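A minimal sketch of the catch-and-retry pattern described above; the
corrector call and the tweak applied are purely illustrative:
  % retry the correction a few times, tweaking settings in the catch
  % block; the corrector throws when It >= ItMX
  for trial = 1:3
    try
      [opts chart] = opts.corr.correct(opts, chart);
      break                            % success
    catch
      opts = coco_set(opts, 'corr', 'TOL', 10*opts.corr.TOL);
    end
  end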
==================
> Minor comment: on line 10 in bddat_init, it should probably say
>
> if ~isfield(opts.efunc, 'op_idx')
No, this is correct. We copy this information from 'efunc' to 'cont' to
indicate that the initialisation has been done, that is, that all the other
fields we create further down do exist.
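A hypothetical reconstruction of the guard being discussed; field names
beyond op_idx are illustrative:
  % testing opts.cont (rather than opts.efunc) marks that bddat_init
  % has already run and created the fields below
  if ~isfield(opts.cont, 'op_idx')
    opts.cont.op_idx = opts.efunc.op_idx;  % copy marks init as done
    opts.cont.bddat  = {};                 % illustrative: more fields here
  end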
==================
> OK. I have made similar graphs and was in the process of working on
> the continuer section. In particular, I had asked you about the
> possibility of using a default event handler to simplify
> locate_events.
Following up on our phone conversation on this: I did not split
locate_events. I don't deem it important. It is quite a long function, but
it does not seem to split naturally. I would like to keep it in one piece.
> In particular, it
> appears to me that the event handling part of cover1d is its own
> finite-state machine embedded in the big finite-state machine.
This observation is correct and I have already used it to simplify my
drawings of the FSM. In fact, we have two finite-state sub-machines:
state_correct and the locate_events part.