Shifted exponential distribution: method of moments


Early in the development of statistics, the moments of a distribution (mean, variance, skewness, kurtosis) were discussed in depth, and estimators were formulated by equating the sample moments (i.e., \(\bar{x}, s^2, \ldots\)) to the corresponding population moments, which are functions of the parameters. This is the method of moments. Given a collection of data that may fit the exponential distribution, for example, we would like to estimate the parameters which best fit the data.

The general setup is as follows. The distribution of \(X\) has \(k\) unknown real-valued parameters, or equivalently, a parameter vector \(\bs{\theta} = (\theta_1, \theta_2, \ldots, \theta_k)\) taking values in a parameter space, a subset of \( \R^k \). First, let \[ \mu^{(j)}(\bs{\theta}) = \E\left(X^j\right), \quad j \in \N_+ \] so that \(\mu^{(j)}(\bs{\theta})\) is the \(j\)th moment of \(X\) about 0. Next, let \[ M^{(j)}(\bs{X}) = \frac{1}{n} \sum_{i=1}^n X_i^j, \quad j \in \N_+ \] so that \(M^{(j)}(\bs{X})\) is the \(j\)th sample moment about 0. Note that we are emphasizing the dependence of the sample moments on the sample \(\bs{X}\). The method of moments equates the first sample moment about the origin, \(M_1=\frac{1}{n}\sum_{i=1}^n X_i=\bar{X}\), to the first theoretical moment \(\E(X)\), the second sample moment to the second theoretical moment, and so on, until there are as many equations as unknown parameters, and then solves. We just need to put a hat (^) on the parameters to make it clear that they are estimators.

Two observations are worth recording. First, matching the mean and variance is equivalent to matching the first two raw moments: the equations \( \mu(U_n, V_n) = M_n \), \( \sigma^2(U_n, V_n) = T_n^2 \) are equivalent to the equations \( \mu(U_n, V_n) = M_n \), \( \mu^{(2)}(U_n, V_n) = M_n^{(2)} \). Second, when there is just one unknown parameter, we need just one equation, but the first moment equation is not always usable. For a random sample from the symmetric beta distribution, in which the left and right parameters are equal to an unknown value \( c \in (0, \infty) \), the mean is \( \frac{1}{2} \), independently of \( c \), and so the first equation in the method of moments is useless; an estimator must come from the second moment.

Some quick examples. For the beta distribution, suppose that \( a \) and \( b \) are both unknown; solving the two moment equations gives \[U = \frac{M \left(M - M^{(2)}\right)}{M^{(2)} - M^2}, \quad V = \frac{(1 - M)\left(M - M^{(2)}\right)}{M^{(2)} - M^2},\] estimators that are complicated nonlinear functions of the sample moments \( M \) and \( M^{(2)} \). For the gamma distribution, suppose that the shape \( k \) is known but the scale \( b \) is unknown; then the method of moments equation for \( V_k \) is \( k V_k = M \), so \( V_k = M / k \). For the normal distribution, if \( \mu \) is known then \( W_n \) is the method of moments estimator of \( \sigma \); the normal distribution is studied in more detail in the chapter on Special Distributions. Finally, the mean of the geometric distribution is \( \mu = 1 / p \), so the first moment equation gives \( \hat{p} = 1 / M \).
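To make the recipe concrete, here is a minimal sketch in Python (the helper name `sample_moment` and the parameter values are our own illustration, not from any referenced text); it applies the one-equation case to the geometric distribution just mentioned, whose mean is \( 1/p \).

```python
import numpy as np

def sample_moment(x, j):
    """j-th sample moment about 0: M_j = (1/n) * sum(x_i ** j)."""
    return np.mean(np.asarray(x, dtype=float) ** j)

# Geometric distribution on {1, 2, ...}: E(X) = 1/p, so the first
# moment equation M_1 = 1/p gives the estimator p_hat = 1/M_1.
rng = np.random.default_rng(0)
x = rng.geometric(p=0.3, size=1000)
p_hat = 1.0 / sample_moment(x, 1)
print(p_hat)  # close to 0.3 for a large sample
```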
Suppose that \( \bs{X} = (X_1, X_2, \ldots, X_n) \) is a random sample of size \( n \) from the Poisson distribution with parameter \( r \), whose probability density function is \[ g(x) = e^{-r} \frac{r^x}{x!}, \quad x \in \N. \] The mean and variance are both \( r \), so the first moment equation gives \( M \) as the method of moments estimator of \( r \). The Poisson distribution is studied in more detail in the chapter on the Poisson Process. Similarly, for an indicator (Bernoulli) variable, since the mean of the distribution is \( p \), it follows from our general work above that the method of moments estimator of \( p \) is \( M \), the sample mean. Note that \( \var(M_n) = \sigma^2/n \) for \( n \in \N_+ \), so the sequence of sample means \( \bs M = (M_1, M_2, \ldots) \) is consistent. It also follows that if both \( \mu \) and \( \sigma^2 \) are unknown, then the method of moments estimator of the standard deviation \( \sigma \) is \( T = \sqrt{T^2} \).

The simplest continuous case is the exponential. For data \( X_1, \ldots, X_n \) IID Exponential(\( \lambda \)), we estimate \( \lambda \) by the value \( \hat{\lambda} \) which satisfies \( 1/\hat{\lambda} = \bar{X} \), i.e., \( \hat{\lambda} = 1/\bar{X} \).
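A quick numerical check of the exponential case; this is a sketch, and note that NumPy parametrizes the exponential by the scale \( 1/\lambda \) rather than the rate:

```python
import numpy as np

# Method of moments for Exponential(lambda): E(X) = 1/lambda,
# so the moment equation X_bar = 1/lambda gives lambda_hat = 1/X_bar.
rng = np.random.default_rng(1)
lam = 2.5
x = rng.exponential(scale=1.0 / lam, size=5000)
lam_hat = 1.0 / x.mean()
print(lam_hat)  # approximately 2.5
```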
Now for the main question. Let \( X_1, X_2, \ldots, X_n \) be iid from a population with pdf \[ f_{\tau, \theta}(y) = \theta e^{-\theta (y - \tau)}, \quad y \ge \tau, \; \theta > 0. \] If \( Y \) has the usual exponential distribution with mean \( 1/\theta \), then \( Y + \tau \) has the pdf above. This distribution is called the two-parameter exponential distribution, or the shifted exponential distribution. (a) Find the mean and variance of the above pdf. (b) How do we find estimators for \( \tau \) and \( \theta \) using the method of moments?

Answer (1 of 2): If we shift the origin of a variable following the exponential distribution, then its distribution is called the shifted exponential distribution, where \( \tau \) and \( \theta \) are the unknown parameters. The plan is the general recipe above: compute the first two population moments (recall that we could make use of MGFs, moment generating functions, to do this), equate them to the sample moments, beginning with \[ \E[Y] = \frac{1}{n}\sum_{i=1}^{n} y_i = \bar{y}, \] and solve. We complete this derivation below, after some related examples. Incidentally, the tractability of the exponential distribution has led many people to study the properties of the exponential family and to propose various estimation techniques (method of moments, mixed moments, maximum likelihood, etc.); the distributions in this family all have pure-exponential tails.

The geometric distribution on \(\N_+\) with success parameter \(p \in (0, 1)\) has probability density function \( g \) given by \[ g(x) = p (1 - p)^{x-1}, \quad x \in \N_+. \] The geometric distribution on \( \N_+ \) governs the number of trials needed to get the first success in a sequence of Bernoulli trials with success parameter \( p \). For the version on \( \N \) (the number of failures before the first success), the mean is \( (1-p)/p \), so the method of moments equation for \(U\) is \((1 - U) \big/ U = M\); solving gives \( U = 1/(M + 1) \). (Showing that the corresponding plug-in is a method of moments estimator of \( \var(X) \) for \( X \sim \mathrm{Geo}(p) \) is a good exercise.)

For a two-parameter family such as the Pareto distribution, whose mean is \( a b / (a - 1) \): suppose that \( a \) is unknown but \( b \) is known, and let \( U_b \) be the method of moments estimator of \( a \); the method of moments equation is \( b U_b \big/ (U_b - 1) = M \). Symmetrically, if \( a \) is known, let \( V_a \) be the method of moments estimator of \( b \).

In exponential family notation, the normal distribution \( X \sim N(\theta, \sigma^2) \) (with \( \sigma^2 \) known) has base measure \( d\nu(x) = \exp\left(-\frac{x^2}{2\sigma^2} - \frac{1}{2}\log(2\pi\sigma^2)\right) dx \), log-partition function \( A(\theta) = \frac{\theta^2}{2\sigma^2} \), and sufficient statistic \( T(x) = \frac{1}{\sigma^2} x \). Let's return to the example in which \( X_1, X_2, \ldots, X_n \) are normal random variables with mean \( \mu \) and variance \( \sigma^2 \); our goal is to see how the comparisons of the competing variance estimators simplify for the normal distribution. Run the normal estimation experiment 1000 times for several values of the sample size \( n \) and the parameters \( \mu \) and \( \sigma \). Recall also that \( U^2 = n W^2 / \sigma^2 \) has the chi-square distribution with \( n \) degrees of freedom, and hence \( U \) has the chi distribution with \( n \) degrees of freedom; we will use this fact below.

Finally, a warm-up for the shifted exponential: the following problem gives a distribution with just one parameter, but the second moment equation from the method of moments is needed to derive an estimator. 8.16 (a) For the double exponential probability density function \[ f(x \mid \theta) = \frac{1}{2\theta} \exp\left(-\frac{|x|}{\theta}\right), \] the first population moment, the expected value of \( X \), is given by \[ \E(X) = \int_{-\infty}^{\infty} \frac{x}{2\theta} \exp\left(-\frac{|x|}{\theta}\right) dx = 0, \] because the integrand is an odd function (\( g(-x) = -g(x) \)). The first population moment does not depend on the unknown parameter, so it cannot be used to estimate \( \theta \): the estimator must come from the second moment equation.
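The double exponential warm-up can be verified numerically. A minimal sketch, assuming the parametrization above, for which \( \E(X^2) = 2\theta^2 \), so the second moment equation gives \( \hat{\theta} = \sqrt{M_2/2} \):

```python
import numpy as np

# Double exponential (Laplace) with scale theta: E(X) = 0 is useless,
# but E(X^2) = 2 * theta^2, so theta_hat = sqrt(M_2 / 2).
rng = np.random.default_rng(2)
theta = 1.7
x = rng.laplace(loc=0.0, scale=theta, size=10000)
m2 = np.mean(x ** 2)            # second sample moment about 0
theta_hat = np.sqrt(m2 / 2.0)
print(theta_hat)  # approximately 1.7
```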
Estimating the mean and variance of a distribution are the simplest applications of the method of moments. To set up the notation, suppose that a distribution on \( \R \) has parameters \( a \) and \( b \). For \( n \in \N_+ \), \( \bs X_n = (X_1, X_2, \ldots, X_n) \) is a random sample of size \( n \) from the distribution; again, the resulting values are called method of moments estimators. So, the first population (distribution) moment, \( \mu_1 \), is just \( \E(X) \), as we know, and the second moment, \( \mu_2 \), is \( \E(X^2) \). Equivalently, \( M^{(j)}(\bs{X}) \) is the sample mean for the random sample \( \left(X_1^j, X_2^j, \ldots, X_n^j\right) \) from the distribution of \( X^j \).

The (continuous) uniform distribution with location parameter \( a \in \R \) and scale parameter \( h \in (0, \infty) \) has probability density function \( g \) given by \[ g(x) = \frac{1}{h}, \quad x \in [a, a + h]. \] The distribution models a point chosen at random from the interval \( [a, a + h] \). If \( a \) is known, the method of moments estimator of \( h \) is \( V_a = 2(M - a) \), and \( \E(V_a) = h \), so \( V_a \) is unbiased. Suppose instead that \( a \) and \( h \) are both unknown, and let \( U \) and \( V \) denote the corresponding method of moments estimators. Related side note: in a two-sample setting (as happens, for instance, with two independent normal samples), notice that the joint pdf belongs to the exponential family, so that a minimal sufficient statistic is given by \[ T(\bs X, \bs Y) = \left( \sum_{j=1}^m X_j^2, \; \sum_{i=1}^n Y_i^2, \; \sum_{j=1}^m X_j, \; \sum_{i=1}^n Y_i \right). \]

Estimating the variance of the distribution, on the other hand, depends on whether the distribution mean \( \mu \) is known or unknown. Let \( M_n \), \( M_n^{(2)} \), and \( T_n^2 \) denote the sample mean, second-order sample mean, and biased sample variance corresponding to \( \bs X_n \), and let \( \mu(a, b) \), \( \mu^{(2)}(a, b) \), and \( \sigma^2(a, b) \) denote the mean, second-order mean, and variance of the distribution. Thus, \( S^2 \) and \( T^2 \) are multiples of one another; \( S^2 \) is unbiased, but when the sampling distribution is normal, \( T^2 \) has smaller mean square error. Of course, the asymptotic relative efficiency is still 1, from our previous theorem. Rather than relying only on exact formulas, we can also investigate the bias and mean square error empirically, through a simulation, as in the sketch below.
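Here is a minimal simulation sketch (our own illustration, with normal data and arbitrary parameter values) comparing the empirical bias and mean square error of \( S^2 \) and \( T^2 \):

```python
import numpy as np

# Empirical bias and MSE of the unbiased variance S^2 (ddof=1) and the
# biased "method of moments" variance T^2 (ddof=0), for normal samples.
rng = np.random.default_rng(3)
mu, sigma2, n, reps = 0.0, 4.0, 10, 100_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
s2 = x.var(axis=1, ddof=1)   # S^2
t2 = x.var(axis=1, ddof=0)   # T^2 = (n-1)/n * S^2

for name, est in [("S^2", s2), ("T^2", t2)]:
    bias = est.mean() - sigma2
    mse = np.mean((est - sigma2) ** 2)
    print(f"{name}: bias ~ {bias:.4f}, MSE ~ {mse:.4f}")
# Typically shows S^2 unbiased but with larger MSE than T^2 under normality.
```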
Back to the normal case where \( \mu \) is known, so that \( W_n \) is the method of moments estimator of \( \sigma \). Solving \( U^2 = n W^2 / \sigma^2 \) gives \[ W = \frac{\sigma}{\sqrt{n}} U. \] From the formulas for the mean and variance of the chi distribution we have \begin{align*} \E(W) & = \frac{\sigma}{\sqrt{n}} \E(U) = \frac{\sigma}{\sqrt{n}} \sqrt{2} \frac{\Gamma[(n + 1) / 2]}{\Gamma(n / 2)} = \sigma a_n, \\ \var(W) & = \frac{\sigma^2}{n} \var(U) = \frac{\sigma^2}{n}\left\{n - [\E(U)]^2\right\} = \sigma^2\left(1 - a_n^2\right), \end{align*} where the sequence \[ a_n = \sqrt{\frac{2}{n}} \frac{\Gamma[(n + 1) / 2]}{\Gamma(n / 2)}, \quad n \in \N_+, \] defined in terms of the gamma function, turns out to be important in the analysis of all three estimators. We have \( 0 \lt a_n \lt 1 \) for \( n \in \N_+ \) and \( a_n \uparrow 1 \) as \( n \uparrow \infty \). (We have suppressed the dependence on \( n \) so far, to keep the notation simple.) Thus \( W \) is negatively biased as an estimator of \( \sigma \) but asymptotically unbiased and consistent.

For \( n \in \N_+ \), the method of moments estimator of \( \sigma^2 \) based on \( \bs X_n \) is \[ T_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - M_n)^2. \] There is no simple, general relationship between \( \mse(T_n^2) \) and \( \mse(S_n^2) \), or between \( \mse(T_n^2) \) and \( \mse(W_n^2) \), but the asymptotic relationship is simple: \( \mse(T_n^2) / \mse(W_n^2) \to 1 \) and \( \mse(T_n^2) / \mse(S_n^2) \to 1 \) as \( n \to \infty \). In light of the previous remarks, we just have to prove one of these limits; the first limit is simple, since the coefficients of \( \sigma_4 \) and \( \sigma^4 \) in \( \mse(T_n^2) \) are asymptotically \( 1 / n \) as \( n \to \infty \). The remaining results follow easily from the previous theorem since \( T_n = \sqrt{\frac{n - 1}{n}} S_n \); part (c) follows from (a) and (b). Run the beta estimation experiment 1000 times for several different values of the sample size \( n \) and the parameters \( a \) and \( b \); note the empirical bias and mean square error of the estimators \( U \) and \( V \), and compare the empirical bias and mean square error of \( S^2 \) and of \( T^2 \) to their theoretical values.

The same program finds the method of moments estimators for the exponential, Poisson, and normal distributions. For the exponential, whose standard density is \( f(x) = e^{-x} \) for \( x \ge 0 \), we know that the mean is one over lambda: \[ \E[Y] = \int_{0}^{\infty} y \lambda e^{-\lambda y} \, dy = \frac{1}{\lambda}. \] A fully worked two-parameter example is the gamma distribution with shape parameter \( \alpha \) and scale parameter \( \theta \), for which \( \E(X) = \alpha\theta \) and \( \var(X) = \alpha\theta^2 \). Again, since we have two parameters for which we are trying to derive method of moments estimators, we need two equations. Equating the first theoretical moment about the origin with the corresponding sample moment, we get \( \alpha\theta = \frac{1}{n}\sum_{i=1}^n X_i = \bar{X} \); equating the second central moments gives \( \alpha\theta^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2 \). Now, we just have to solve for the two parameters \( \alpha \) and \( \theta \). Dividing the second equation by the first yields \( \hat{\theta}_{MM} = \frac{1}{n\bar{X}}\sum_{i=1}^n (X_i - \bar{X})^2 \). And, substituting that value of \( \theta \) back into the equation we have for \( \alpha \), and putting on its hat, we get that the method of moments estimator for \( \alpha \) is \[ \hat{\alpha}_{MM}=\dfrac{\bar{X}}{\hat{\theta}_{MM}}=\dfrac{\bar{X}}{(1/n\bar{X})\sum_{i=1}^n (X_i-\bar{X})^2}=\dfrac{n\bar{X}^2}{\sum_{i=1}^n (X_i-\bar{X})^2}. \] (If instead the shape \( k \) is unknown but the scale \( b \) is known, a single moment equation suffices.)
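A numerical check of the gamma estimators; a sketch, assuming NumPy's shape/scale parametrization, which matches the \( \alpha, \theta \) convention used above:

```python
import numpy as np

# Method of moments for the gamma distribution (shape alpha, scale theta):
#   alpha * theta   = X_bar                           (first moment)
#   alpha * theta^2 = (1/n) * sum((X_i - X_bar)^2)    (second central moment)
rng = np.random.default_rng(4)
alpha, theta = 3.0, 2.0
x = rng.gamma(shape=alpha, scale=theta, size=20000)

xbar = x.mean()
v = np.mean((x - xbar) ** 2)       # biased sample variance T^2
theta_hat = v / xbar               # = sum((x_i - xbar)^2) / (n * xbar)
alpha_hat = xbar / theta_hat       # = n * xbar^2 / sum((x_i - xbar)^2)
print(alpha_hat, theta_hat)        # approximately 3.0 and 2.0
```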
To fix terminology: \( E(X^k) \) is the \( k \)th (theoretical) moment of the distribution (about the origin), and \( E\left[(X-\mu)^k\right] \) is the \( k \)th (theoretical) moment of the distribution (about the mean); \( M_k=\dfrac{1}{n}\sum_{i=1}^n X_i^k \) is the \( k \)th sample moment, for \( k=1, 2, \ldots \), and \( M_k^\ast =\dfrac{1}{n}\sum_{i=1}^n (X_i-\bar{X})^k \) is the \( k \)th sample moment about the mean, for \( k=1, 2, \ldots \). The idea behind method of moments estimators is to equate the two and solve for the unknown parameters. We write \( \bs{X} = (X_1, X_2, \ldots, X_n) \); thus, \( \bs{X} \) is a sequence of independent random variables, each with the distribution of \( X \). In the gamma example above, the first theoretical moment about the origin is \( E(X_i) = \alpha\theta \), and the second theoretical moment about the mean is \( \text{Var}(X_i)=E\left[(X_i-\mu)^2\right]=\alpha\theta^2 \).

Two further examples in this notation. In the hypergeometric model, the parameter \( r \), the type 1 size, is a nonnegative integer with \( r \le N \); the method of moments estimator of \( p = r / N \) is \( M = Y / n \), the sample mean. In fact, if the sampling is with replacement, the Bernoulli trials model would apply rather than the hypergeometric model. For the negative binomial distribution with \( p \) known, solving the single moment equation gives the result: the method of moments estimator of \( k \) is \[ U_p = \frac{p}{1 - p} M, \] and \( \var(U_p) = \frac{k}{n (1 - p)} \), so \( U_p \) is consistent. (Similarly, if \( k \) is known but \( p \) is unknown, one equation again suffices.)
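The single-equation estimator can be checked numerically; a sketch, assuming NumPy's negative binomial convention (the count of failures before the \( k \)th success), which matches the distribution on \( \N \) used here:

```python
import numpy as np

# Negative binomial on {0, 1, ...} with p known: mean = k * (1 - p) / p,
# so the single moment equation gives k_hat = U_p = p / (1 - p) * M.
rng = np.random.default_rng(6)
k, p = 5, 0.4
x = rng.negative_binomial(k, p, size=20000)  # failures before k-th success
U_p = p / (1.0 - p) * x.mean()
print(U_p)  # approximately 5
```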
Throughout this subsection, we assume that we have a basic real-valued random variable \( X \) with \( \mu = \E(X) \in \R \) and \( \sigma^2 = \var(X) \in (0, \infty) \); these are the basic parameters, and typically one or both is unknown. We sample from the distribution of \( X \) to produce a sequence \( \bs X = (X_1, X_2, \ldots) \) of independent variables, each with the distribution of \( X \). Equating \( \E(X) \) with the sample mean, we get that the method of moments estimator of \( \mu \) is \( M \) (which we know, from our previous work, is unbiased). Next, suppose that the mean \( \mu \) is known and the variance \( \sigma^2 \) unknown, the usually unrealistic (but mathematically interesting) case. Recall that \( \E(T_n^2) = \frac{n-1}{n}\sigma^2 \); because of this result, \( T_n^2 \) is referred to as the biased sample variance, to distinguish it from the ordinary (unbiased) sample variance \( S_n^2 \). Note also that, in terms of bias and mean square error, \( S \) with sample size \( n \) behaves like \( W \) with sample size \( n - 1 \); similar comparisons hold for the mean square errors of \( T^2 \) and \( W^2 \).

Suppose now that \(\bs{X} = (X_1, X_2, \ldots, X_n)\) is a random sample of size \(n\) from the negative binomial distribution on \( \N \) with shape parameter \( k \) and success parameter \( p \). The mean of the distribution is \( k (1 - p) \big/ p \) and the variance is \( k (1 - p) \big/ p^2 \). Matching the distribution mean and variance to the sample mean and variance gives the equations \[ U \frac{1 - V}{V} = M, \quad U \frac{1 - V}{V^2} = T^2. \] If \( k \) and \( p \) are both unknown, then solving these yields the corresponding method of moments estimators \[ U = \frac{M^2}{T^2 - M}, \quad V = \frac{M}{T^2}. \] (The geometric distribution, the case \( k = 1 \), is considered a discrete version of the exponential distribution.)

We can now finish the shifted exponential problem. The mean and variance are \( \E(Y) = \tau + 1/\theta \) and \( \var(Y) = 1/\theta^2 \) (a location shift leaves the variance unchanged), which answers part (a). For part (b), the first moment equation is \( \mu_1 = \E(Y) = \tau + \frac{1}{\theta} = \bar{Y} = m_1 \), where \( m_1 \) is the first sample moment. (A small caveat on notation: \( \mu_1 = \bar{Y} \) does not hold as an identity; the method of moments imposes it as an estimating equation.) The second moment equation is \[ \mu_2 - \mu_1^2 = \var(Y) = \frac{1}{\theta^2} = \left(\frac{1}{n} \sum_{i=1}^n Y_i^2\right) - \bar{Y}^2 = \frac{1}{n}\sum_{i=1}^n (Y_i - \bar{Y})^2 \implies \hat{\theta} = \sqrt{\frac{n}{\sum_{i=1}^n (Y_i - \bar{Y})^2}}. \] Then, substituting this result into the first moment equation, we have \[ \hat{\tau} = \bar{Y} - \sqrt{\frac{\sum_{i=1}^n (Y_i - \bar{Y})^2}{n}}. \] Our work is done!

A closely related problem: an engineering component has a lifetime \( Y \) which follows a shifted exponential distribution; in particular, the probability density function of \( Y \) is \[ f_Y(y; \theta) = e^{-(y - \theta)}, \quad y > \theta, \] where the unknown parameter \( \theta > 0 \) measures the magnitude of the shift. Find the maximum likelihood estimator for \( \theta \). (The likelihood is increasing in \( \theta \) for \( \theta \le y_{(1)} \), so the MLE is the sample minimum \( Y_{(1)} \); thus, by Basu's theorem, \( Y_{(1)} \) is independent of location-invariant differences such as \( Y_{(2)} - Y_{(1)} \).) The same ideas extend to \( m \) random samples drawn independently from \( m \) shifted exponential distributions, with respective location parameters \( \tau_1, \tau_2, \ldots, \tau_m \) and a common scale parameter. Maximum likelihood for the Laplace location model, by contrast, can be less clean: for illustration, consider a sample of size \( n = 10 \) from the Laplace distribution with \( \theta = 0 \); in Figure 1 (not reproduced here) the log-likelihood flattens out, so there is an entire interval where the likelihood equation is satisfied.
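Finally, a numerical check of the shifted exponential method of moments estimators \( \hat{\theta} \) and \( \hat{\tau} \) derived above (a sketch; the parameter values are arbitrary):

```python
import numpy as np

# Shifted exponential method of moments estimators:
#   theta_hat = sqrt(n / sum((y_i - y_bar)^2))
#   tau_hat   = y_bar - sqrt(sum((y_i - y_bar)^2) / n)
rng = np.random.default_rng(5)
tau, theta, n = 4.0, 2.0, 10000
y = tau + rng.exponential(scale=1.0 / theta, size=n)  # shift an exponential

ybar = y.mean()
v = np.mean((y - ybar) ** 2)          # biased sample variance
theta_hat = 1.0 / np.sqrt(v)          # = sqrt(n / sum((y - ybar)^2))
tau_hat = ybar - np.sqrt(v)           # = ybar - 1/theta_hat
print(theta_hat, tau_hat)             # approximately 2.0 and 4.0
```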

