From: RAVI VARADHAN <rvaradhan_at_jhmi.edu>

Date: Tue, 08 May 2007 10:23:08 -0400

R-help_at_stat.math.ethz.ch mailing list: https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Received on Tue 08 May 2007 - 14:49:48 GMT

Paul,

> It seems that there is here a problem of reliability, as one never
> knows whether the solution provided by R is correct or not. In the
> case that I reported, it is fairly simple to see that the solution
> provided by R (without any warning!) is incorrect, but, in general,
> that is not so simple and one may take a wrong solution as a correct
> one.
>
> Paul
>
> On 5/8/07, Ravi Varadhan <rvaradhan_at_jhmi.edu> wrote:
> > Your function, (x1-x2)^2, has zero gradient at all starting values
> > such that x1 = x2, which means that gradient-based search methods
> > will terminate there because they have found a critical point, i.e.
> > a point at which the gradient is zero (which can be a maximum, a
> > minimum, or a saddle point).
> >
> > However, I do not know why optim converges to the boundary maximum
> > when the analytic gradient is supplied (as shown by Sundar).
> >
> > Ravi.
**> >
> > ---------------------------------------------------------------------
> > Ravi Varadhan, Ph.D.
> > Assistant Professor, The Center on Aging and Health
> > Division of Geriatric Medicine and Gerontology
> > Johns Hopkins University
> > Ph: (410) 502-2619
> > Fax: (410) 614-9625
> > Email: rvaradhan_at_jhmi.edu
> > Webpage:
> > ---------------------------------------------------------------------
**> >
> >
> > -----Original Message-----
> > From: r-help-bounces_at_stat.math.ethz.ch [ On Behalf Of Paul Smith
> > Sent: Monday, May 07, 2007 6:26 PM
> > To: R-help
> > Subject: Re: [R] Bad optimization solution
> >
> > On 5/7/07, Paul Smith <phhs80_at_gmail.com> wrote:
> > > > I think the problem is the starting point. I do not remember the
> > > > details of the BFGS method, but I am almost sure the (.5, .5)
> > > > starting point is suspect, since the abs function is not
> > > > differentiable at 0. If you perturb the starting point even
> > > > slightly you will have no problem.
> > > >
> > > > "Paul Smith" <phhs80_at_gmail.com>
> > > > Sent by: r-help-bounces_at_stat.math.ethz.ch
> > > > To: R-help <r-help_at_stat.math.ethz.ch>
> > > > cc:
> > > > Subject: [R] Bad optimization solution
> > > > 05/07/2007 04:30 PM
> > > >
> > > > Dear All,
> > > >
> > > > I am trying to perform the optimization problem below, but I am
> > > > getting (0.5,0.5) as the optimal solution, which is wrong; the
> > > > correct solution should be (1,0) or (0,1).
> > > >
> > > > Am I doing something wrong? I am using R 2.5.0 on Fedora Core 6 (Linux).
> > > >
> > > > Thanks in advance,
> > > >
> > > > Paul
> > > >
> > > > ------------------------------------------------------
> > > > myfunc <- function(x) {
> > > >   x1 <- x[1]
> > > >   x2 <- x[2]
> > > >   abs(x1 - x2)
> > > > }
> > > >
> > > > optim(c(0.5, 0.5), myfunc, lower = c(0, 0), upper = c(1, 1),
> > > >       method = "L-BFGS-B", control = list(fnscale = -1))
> > >
> > > Yes, with (0.2,0.9) a correct solution comes out. However, how can
> > > one be sure in general that the solution obtained by optim is
> > > correct? ?optim says:
> > >
> > >   Method '"L-BFGS-B"' is that of Byrd et al. (1995) which allows
> > >   box constraints, that is each variable can be given a lower
> > >   and/or upper bound. The initial value must satisfy the
> > >   constraints. This uses a limited-memory modification of the
> > >   BFGS quasi-Newton method. If non-trivial bounds are supplied,
> > >   this method will be selected, with a warning.
> > >
> > > which only demands that "the initial value must satisfy the constraints".
> >
> > Furthermore, x^2 is differentiable everywhere, and yet the reported
> > problem still occurs with
> >
> > myfunc <- function(x) {
> >   x1 <- x[1]
> >   x2 <- x[2]
> >   (x1 - x2)^2
> > }
> >
> > optim(c(0.2, 0.2), myfunc, lower = c(0, 0), upper = c(1, 1),
> >       method = "L-BFGS-B", control = list(fnscale = -1))
> >
> > Paul
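Ravi's explanation above can be checked directly (a sketch, not part of the original thread): with a symmetric starting point the finite-difference gradient of (x1-x2)^2 is exactly zero, so L-BFGS-B declares convergence without moving, while a perturbed start reaches a true boundary maximum.

```r
## f(x) = (x1 - x2)^2 has zero gradient everywhere on the line x1 == x2.
myfunc <- function(x) (x[1] - x[2])^2

## Started on the critical line: optim stops at the start with value 0.
bad <- optim(c(0.5, 0.5), myfunc, method = "L-BFGS-B",
             lower = c(0, 0), upper = c(1, 1),
             control = list(fnscale = -1))

## Started off the line: optim climbs to a boundary maximum with value 1.
good <- optim(c(0.2, 0.9), myfunc, method = "L-BFGS-B",
              lower = c(0, 0), upper = c(1, 1),
              control = list(fnscale = -1))

bad$par    # (0.5, 0.5) -- the spurious interior "solution"
good$par   # a genuine maximum of the constrained problem
```

This also explains why no warning is issued: from optim's point of view, a point with (numerically) zero gradient is a legitimate stopping point.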
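Sundar's analytic-gradient version is not reproduced in this thread; the following is a sketch of one way it might look (the gradient function mygrad is an assumption, not Sundar's code). Note that the gradient vanishes identically whenever x1 == x2, which is why symmetric starting points are critical points of the search.

```r
myfunc <- function(x) (x[1] - x[2])^2

## Analytic gradient of f; exactly zero whenever x1 == x2 (assumed form,
## not taken from Sundar's post).
mygrad <- function(x) c(2 * (x[1] - x[2]), -2 * (x[1] - x[2]))

res <- optim(c(0.2, 0.9), myfunc, gr = mygrad, method = "L-BFGS-B",
             lower = c(0, 0), upper = c(1, 1),
             control = list(fnscale = -1))
res$par   # a boundary maximum
```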

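As for Paul's broader question of how one can trust an optim result: one common safeguard (a sketch, not suggested in the thread itself) is to rerun the optimizer from several random starting points and keep the best value found. Agreement across many starts raises, but does not guarantee, confidence in the solution.

```r
myfunc <- function(x) (x[1] - x[2])^2

set.seed(1)                               # for reproducibility
starts <- matrix(runif(20), ncol = 2)     # 10 random points in [0,1]^2

## Best objective value over all restarts; random starts almost surely
## avoid the critical line x1 == x2.
values <- apply(starts, 1, function(s)
  optim(s, myfunc, method = "L-BFGS-B",
        lower = c(0, 0), upper = c(1, 1),
        control = list(fnscale = -1))$value)

max(values)   # the true constrained maximum
```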

Archive maintained by Robert King, hosted by the discipline of statistics at the University of Newcastle, Australia.

Archive generated by hypermail 2.2.0, at Tue 08 May 2007 - 16:31:25 GMT.
