

Graduation Thesis Foreign Literature Translation (Mathematics Major)

Chongqing University of Technology
Literature Translation
School: School of Mathematics and Statistics

Translated from: The Newton Raphson Algorithm for Function Optimization
Kevin Quinn
Assistant Professor
Department of Political Science and
The Center for Statistics and the Social Sciences
Box 354322, Padelford Hall
University of Washington
Seattle, WA 98195-4322
October 25, 2001

1 Introduction

Throughout this course we will be interested in calculating maximum likelihood estimates (MLEs). Such estimates are often extremely complicated nonlinear functions of the observed data. As a result, closed-form expressions for the MLEs will generally not exist for the models we are working with. The Newton Raphson algorithm is an iterative procedure that can be used to calculate MLEs.

The basic idea behind the algorithm is the following. First, construct a quadratic approximation to the function of interest around some initial parameter value (hopefully close to the MLE). Next, adjust the parameter value to the value that maximizes this quadratic approximation. This procedure is iterated until the parameter values stabilize.

These notes begin with the easy-to-visualize case of maximizing a function of one variable. After that case is developed, we turn to the more general case of maximizing a function of $k$ variables.
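Before developing the details, it may help to see the loop shape this describes in code. The following minimal Python sketch is not part of the original notes: the function names, stopping rule, and iteration cap are illustrative assumptions, and the update it applies (jump to the maximizer of the local quadratic model) is the one derived in Section 2.3 below.

```python
def newton_step(fprime, fprime2, x):
    """Jump to the maximizer of the local quadratic (second-order Taylor) model."""
    return x - fprime(x) / fprime2(x)

def maximize_1d(fprime, fprime2, x0, tol=1e-8, max_iter=100):
    """Iterate Newton steps until the parameter value stabilizes."""
    x = x0
    for _ in range(max_iter):
        x_new = newton_step(fprime, fprime2, x)
        if abs(x_new - x) < tol:  # parameter value has stabilized
            return x_new
        x = x_new
    raise RuntimeError("Newton-Raphson did not converge")
```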
2 The Newton Raphson Algorithm for Finding the Maximum of a Function of 1 Variable

2.1 Taylor Series Approximations

The first part of developing the Newton Raphson algorithm is to devise a way to approximate the likelihood function with a function that can be easily maximized analytically. To do this we make use of Taylor's Theorem.

Theorem 1 (Taylor's Theorem, 1 dimension). Suppose the function $f$ is $n+1$ times differentiable on an open interval $I$. For any points $x$ and $x+h$ in $I$ there exists a point $w$ between $x$ and $x+h$ such that

$$f(x+h) = f(x) + f'(x)h + \frac{f''(x)}{2!}h^2 + \cdots + \frac{f^{(n)}(x)}{n!}h^n + \frac{f^{(n+1)}(w)}{(n+1)!}h^{n+1}. \tag{1}$$

It can be shown that as $h$ goes to $0$ the higher-order terms in equation (1) go to $0$ much faster than $h$ goes to $0$. This means that (for small values of $h$)

$$f(x+h) \approx f(x) + f'(x)h.$$

This is referred to as a first-order Taylor approximation of $f$ at $x$. A more accurate approximation to $f(x+h)$ can be constructed for small values of $h$ as

$$f(x+h) \approx f(x) + f'(x)h + \frac{f''(x)}{2}h^2.$$

This is known as a second-order Taylor approximation of $f$ at $x$.

Note that the first-order Taylor approximation can be rewritten as

$$f(x+h) \approx \alpha + \beta h, \qquad \alpha = f(x), \quad \beta = f'(x).$$

This highlights the fact that the first-order Taylor approximation is a linear function in $h$. Similarly, the second-order Taylor approximation can be rewritten as

$$f(x+h) \approx \alpha + \beta h + \gamma h^2, \qquad \alpha = f(x), \quad \beta = f'(x), \quad \gamma = \frac{f''(x)}{2}.$$

This highlights the fact that the second-order Taylor approximation is a second-order polynomial in $h$.

2.2 Finding the Maximum of a Second-Order Polynomial

Suppose we want to find the value of $h$ that maximizes

$$g(h) = \alpha + \beta h + \gamma h^2.$$

First, we calculate the first derivative of $g$:

$$g'(h) = \beta + 2\gamma h.$$

We know that $g'(h^*) = 0$, where $h^*$ is the value of $h$ at which $g$ attains its maximum. In other words, we know that

$$\beta + 2\gamma h^* = 0.$$

Solving for $h^*$ we find that $h^* = -\beta/(2\gamma)$. The second-order condition is $g''(h^*) = 2\gamma$. This implies that $h^*$ will be a maximum whenever $\gamma < 0$.

2.3 The Newton Raphson Algorithm

Suppose we want to find the value of $x$ that maximizes some twice continuously differentiable function $f(x)$. Recall

$$f(x+h) \approx \alpha + \beta h + \gamma h^2$$

where $\alpha = f(x)$, $\beta = f'(x)$, and $\gamma = f''(x)/2$. The first-order condition for the value of $h$ (denoted $h^*$) that maximizes this approximation is

$$\beta + 2\gamma h^* = f'(x) + f''(x)h^* = 0,$$

which implies $h^* = -f'(x)/f''(x)$. In other words, the value of $x+h$ that maximizes the second-order Taylor approximation to $f$ at $x$ is

$$x + h^* = x - \frac{f'(x)}{f''(x)}.$$

With this in mind we can specify the Newton Raphson algorithm for 1-dimensional function optimization.

Algorithm 2.1: NewtonRaphson1D($f$, $x^{(0)}$, tolerance)
  comment: find the value of $x$ that maximizes $f(x)$
  while $|f'(x^{(t)})| >$ tolerance
    do $x^{(t+1)} \leftarrow x^{(t)} - f'(x^{(t)})/f''(x^{(t)})$
  return ($x^{(t)}$)

Caution: the Newton Raphson algorithm does not check the second-order condition necessary for $x^{(t)}$ to be a maximizer. This means that if you give the algorithm a bad starting value for $x^{(0)}$ you may end up with a minimum rather than a maximum.

2.4 Example: Calculating the MLE of a Binomial Sampling Model

To see how the Newton Raphson algorithm works in practice, let's look at a simple example with an analytical solution: a simple model of binomial sampling. Our log-likelihood function is

$$\ell(\pi) = y \log \pi + (n - y)\log(1 - \pi)$$

where $n$ is the sample size, $y$ is the number of successes, and $\pi$ is the probability of a success. The first derivative of the log-likelihood function is

$$\ell'(\pi) = \frac{y}{\pi} - \frac{n - y}{1 - \pi}$$

and the second derivative of the log-likelihood function is

$$\ell''(\pi) = -\frac{y}{\pi^2} - \frac{n - y}{(1 - \pi)^2}.$$

Analytically, we know that the MLE is $\hat{\pi} = y/n$.

For the sake of example, suppose the observed $n$ and $y$ are such that, analytically, the MLE is $\hat{\pi} = 0.40$. Let's see how the Newton Raphson algorithm works in this situation. We begin by setting a tolerance level; in this case, let's set it to 0.01 (in practice you probably want something closer to 0.00001). Next we make an initial guess (denoted $\pi^{(0)}$) as to the MLE. $\ell'(\pi^{(0)})$ is larger in absolute value than our tolerance of 0.01, so we set

$$\pi^{(1)} = \pi^{(0)} - \frac{\ell'(\pi^{(0)})}{\ell''(\pi^{(0)})}.$$

Now we calculate $\ell'(\pi^{(1)})$, which is still larger in absolute value than our tolerance of 0.01, so we set $\pi^{(2)} = \pi^{(1)} - \ell'(\pi^{(1)})/\ell''(\pi^{(1)})$. $\ell'(\pi^{(2)})$ is smaller in absolute value than our tolerance of 0.01, so we can stop. The Newton Raphson algorithm here returns a value of $\pi$ approximately equal to 0.3994, which is reasonably close to the analytical value of 0.40.
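For concreteness, here is a minimal Python sketch of this run. The particular $n$, $y$, and starting value $\pi^{(0)}$ used in the original are not recoverable from this copy, so n = 10, y = 4 (chosen only because they give the same analytical MLE of 0.40) and an initial guess of 0.2 are stand-in assumptions; with them the loop stops near 0.3999 rather than exactly at the 0.3994 reported above.

```python
n, y = 10, 4   # stand-in data (assumption): chosen so that y/n = 0.40
tol = 0.01     # tolerance level used in the text

def score(p):
    """First derivative of the binomial log-likelihood."""
    return y / p - (n - y) / (1 - p)

def hessian(p):
    """Second derivative of the binomial log-likelihood."""
    return -y / p**2 - (n - y) / (1 - p)**2

p = 0.2        # initial guess pi^(0) (assumption)
while abs(score(p)) > tol:
    p = p - score(p) / hessian(p)   # Newton-Raphson update

print(p)       # ~0.3999, close to the analytical MLE of 0.40
```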
Note that we can make the Newton Raphson procedure more accurate (within machine precision) by setting the tolerance level closer to 0.

3 The Newton Raphson Algorithm for Finding the Maximum of a Function of k Variables

3.1 Taylor Series Approximations in k Dimensions

Consider a function $f : \mathbb{R}^k \to \mathbb{R}$ that is at least twice continuously differentiable. Suppose $\mathbf{x} \in \mathbb{R}^k$ and $\mathbf{h} \in \mathbb{R}^k$. Then the first-order Taylor approximation to $f$ at $\mathbf{x}$ is given by

$$f(\mathbf{x} + \mathbf{h}) \approx f(\mathbf{x}) + \nabla f(\mathbf{x})'\mathbf{h}$$

and the second-order Taylor approximation to $f$ at $\mathbf{x}$ is given by

$$f(\mathbf{x} + \mathbf{h}) \approx f(\mathbf{x}) + \nabla f(\mathbf{x})'\mathbf{h} + \frac{1}{2}\mathbf{h}'\,\nabla^2 f(\mathbf{x})\,\mathbf{h}$$

where $\nabla f(\mathbf{x})$ is the gradient (vector of first derivatives) of $f$ at $\mathbf{x}$, and $\nabla^2 f(\mathbf{x})$ is the Hessian (matrix of second derivatives) of $f$ at $\mathbf{x}$.

3.2 Finding the Maximum of a Second-Order Polynomial in k Variables

Consider

$$g(\mathbf{h}) = \alpha + \mathbf{b}'\mathbf{h} + \mathbf{h}'\mathbf{C}\mathbf{h}$$

where $\alpha$ is a scalar, $\mathbf{b}$ and $\mathbf{h}$ are $k$-vectors, and $\mathbf{C}$ is a symmetric, negative definite $k \times k$ matrix. The gradient of $g$ at $\mathbf{h}$ is

$$\nabla g(\mathbf{h}) = \mathbf{b} + 2\mathbf{C}\mathbf{h}.$$

Since the gradient at the value that maximizes $g$ will be a vector of zeros, we know that the maximizer $\mathbf{h}^*$ satisfies

$$\mathbf{b} + 2\mathbf{C}\mathbf{h}^* = \mathbf{0}.$$

Solving for $\mathbf{h}^*$ we find that

$$\mathbf{h}^* = -\frac{1}{2}\mathbf{C}^{-1}\mathbf{b}.$$

Since $\mathbf{C}$ is assumed to be negative definite, we know that this is a maximum.

3.3 The Newton Raphson Algorithm in k Dimensions

Suppose we want to find the $\mathbf{x}$ that maximizes the twice continuously differentiable function $f : \mathbb{R}^k \to \mathbb{R}$. Recall

$$f(\mathbf{x} + \mathbf{h}) \approx \alpha + \mathbf{b}'\mathbf{h} + \mathbf{h}'\mathbf{C}\mathbf{h}$$

where $\alpha = f(\mathbf{x})$, $\mathbf{b} = \nabla f(\mathbf{x})$, and $\mathbf{C} = \frac{1}{2}\nabla^2 f(\mathbf{x})$. Note that $\mathbf{C}$ will be symmetric. This implies

$$f(\mathbf{x} + \mathbf{h}) \approx f(\mathbf{x}) + \nabla f(\mathbf{x})'\mathbf{h} + \frac{1}{2}\mathbf{h}'\,\nabla^2 f(\mathbf{x})\,\mathbf{h}.$$

Once again, the first-order condition for a maximum is

$$\nabla f(\mathbf{x}) + \nabla^2 f(\mathbf{x})\,\mathbf{h}^* = \mathbf{0},$$

which implies that

$$\mathbf{h}^* = -\left[\nabla^2 f(\mathbf{x})\right]^{-1}\nabla f(\mathbf{x}).$$

In other words, the vector $\mathbf{x} + \mathbf{h}$ that maximizes the second-order Taylor approximation to $f$ at $\mathbf{x}$ is

$$\mathbf{x} + \mathbf{h}^* = \mathbf{x} - \left[\nabla^2 f(\mathbf{x})\right]^{-1}\nabla f(\mathbf{x}).$$

With this in mind we can specify the Newton Raphson algorithm for k-dimensional function optimization.

Algorithm 3.1: NewtonRaphsonKD($f$, $\mathbf{x}^{(0)}$, tolerance)
  comment: find the value of $\mathbf{x}$ that maximizes $f(\mathbf{x})$
  while $\|\nabla f(\mathbf{x}^{(t)})\| >$ tolerance
    do $\mathbf{x}^{(t+1)} \leftarrow \mathbf{x}^{(t)} - \left[\nabla^2 f(\mathbf{x}^{(t)})\right]^{-1}\nabla f(\mathbf{x}^{(t)})$
  return ($\mathbf{x}^{(t)}$)
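In practice the k-dimensional update is a one-line linear solve. The sketch below is not from the original notes: the helper name, the convergence test on the gradient norm, and the concave test objective are my own illustrative choices, and it solves $\nabla^2 f(\mathbf{x})\,\mathbf{h} = \nabla f(\mathbf{x})$ rather than forming the inverse explicitly, which is mathematically equivalent to the $\mathbf{x} - [\nabla^2 f(\mathbf{x})]^{-1}\nabla f(\mathbf{x})$ step above but numerically better behaved.

```python
import numpy as np

def newton_raphson_kd(grad, hess, x0, tol=1e-8, max_iter=100):
    """Maximize f via x <- x - [Hessian]^(-1) gradient (Algorithm 3.1)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # first-order condition (approximately) met
            return x
        x = x - np.linalg.solve(hess(x), g)  # solve H h = g instead of inverting H
    raise RuntimeError("Newton-Raphson did not converge")

# Illustrative concave objective (assumption): f(x) = -(x1 - 1)^2 - 2*(x2 + 0.5)^2
grad = lambda x: np.array([-2.0 * (x[0] - 1.0), -4.0 * (x[1] + 0.5)])
hess = lambda x: np.array([[-2.0, 0.0], [0.0, -4.0]])

print(newton_raphson_kd(grad, hess, [0.0, 0.0]))  # -> [ 1.  -0.5]
```

Because this test objective is itself quadratic, the quadratic model is exact and a single Newton step lands on the maximizer; for a general concave function the loop would take several steps.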
