‘NOT’ Changepoint Analysis (Mon, 09 Jan 2023)
/stor-i-student-sites/wanchen-yue/2023/01/09/not-changepoint-analysis/

When applying a piecewise linear regression model to estimate trend components, change-point detection plays an essential role in deciding the number and locations of the change points. The previous literature on change-point detection shows that most existing techniques focus on scenarios where the target function is assumed to be piecewise constant. More general change-point detection problems, where f(t) is assumed to be a piecewise parametric function (including piecewise linear), have received comparatively less attention in this research area. However, with the growing need for trend analysis in various fields, more generic change-point detection methods have been introduced in the recent decade, enabling researchers to estimate the points at which a regression function shifts. Among these methods, I will introduce and explain the Narrowest-over-Threshold (NOT) method proposed by Baranowski et al. (2019).

The most important idea adopted in this approach is a combination of ‘global’ and ‘local’ analysis of the time-series data. At the ‘global’ stage, we first draw M sub-intervals along the total time span. This can be achieved simply by drawing p uniformly from {0, . . . , T − 1} and q uniformly from {1, . . . , T}; M valid pairs (p, q) satisfying q − p ≥ 2d (since we typically require at least d data points to estimate a d-dimensional parameter vector Θ on each side of a candidate change point) are then drawn and recorded. Next, we calculate the generalized likelihood ratio statistic over all candidate points i within one sub-interval (p, q]:
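The rendered formula is not reproduced in this export. As a sketch of the statistic defined in Baranowski et al. (2019) (the exact contrast depends on the assumed parametric family and noise model), it takes the generalized likelihood ratio form

$$
R_{(p,q]}(Y) \;=\; \max_{p < b < q} \Big\{ \sup_{\theta_1,\,\theta_2} \big[\, \ell\big(Y_{(p,b]};\theta_1\big) + \ell\big(Y_{(b,q]};\theta_2\big) \,\big] \;-\; \sup_{\theta} \, \ell\big(Y_{(p,q]};\theta\big) \Big\},
$$

where ℓ denotes the log-likelihood of the fitted d-dimensional parametric model on the indicated segment: the statistic compares the best two-segment fit (split at b) against the best single-segment fit over (p, q].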

The same maximization of the generalized likelihood ratio is then carried out over all M sub-intervals (p, q)_1, (p, q)_2, . . . , (p, q)_M. In the next ‘local’ stage, we set a threshold value λ_T, compare each R_(p,q](Y) with λ_T, and pick out the significant maximum ratio statistics lying above the threshold: R_(p_s, q_s](Y). Finally, among these, the sub-interval (p_s*, q_s*] with the narrowest length is chosen, and the point i* attaining its maximum generalized likelihood ratio statistic is the (first) change point that we aim to locate. The same process is then applied recursively to the left and to the right of the change point just found, and the algorithm stops when no significant maximum generalized likelihood ratio statistics remain.
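The global/local recursion described above can be sketched in code. The following is a minimal, simplified illustration for the piecewise-constant case only, assuming Gaussian noise with unit variance and using the standard CUSUM contrast in place of the full parametric likelihood ratio; the function names, the default number of random intervals, and the threshold value are all illustrative choices, not part of the NOT paper.

```python
import numpy as np

def max_cusum(y, p, q):
    """Max CUSUM contrast over candidate split points b in (p, q).
    Returns (best_stat, best_b). Assumes a piecewise-constant mean."""
    best, best_b = -np.inf, None
    for b in range(p + 1, q):
        n1, n2 = b - p, q - b
        # standard CUSUM contrast for a single mean shift at b
        stat = np.sqrt(n1 * n2 / (n1 + n2)) * abs(y[p:b].mean() - y[b:q].mean())
        if stat > best:
            best, best_b = stat, b
    return best, best_b

def not_changepoints(y, n_intervals=200, threshold=4.0, seed=0):
    """Simplified Narrowest-over-Threshold search (piecewise-constant sketch)."""
    rng = np.random.default_rng(seed)
    cps = []

    def recurse(lo, hi):
        if hi - lo < 2:
            return
        cands = []
        # 'global' stage: draw random sub-intervals (p, q] inside (lo, hi]
        for _ in range(n_intervals):
            p = int(rng.integers(lo, hi - 1))     # p in [lo, hi-2]
            q = int(rng.integers(p + 2, hi + 1))  # q in [p+2, hi]
            stat, b = max_cusum(y, p, q)
            if stat > threshold:                  # 'local' stage: thresholding
                cands.append((q - p, -stat, b))
        if not cands:
            return
        # narrowest significant interval wins (ties broken by larger statistic)
        _, _, b = min(cands)
        cps.append(b)
        recurse(lo, b)   # repeat to the left of the detected change point
        recurse(b, hi)   # ... and to the right
    recurse(0, len(y))
    return sorted(cps)
```

On a series with one clear mean shift, `not_changepoints` typically returns a single estimate close to the true change location; raising `threshold` trades detection power for fewer false positives.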

Gibbs Sampling (Tue, 15 Nov 2022)
/stor-i-student-sites/wanchen-yue/2022/11/15/hello-world/

All that mattered was that slowly, by degrees, by left and right then left and right again, he was guiding them towards the destination.

There exist a number of algorithms that can give us a proposal density for our Markov chain's dynamics. Using the Metropolis-Hastings algorithm, we can obtain an acceptance probability for each proposed change of state within the Markov chain. However, in cases where we only know the conditional distributions between the variables, the Gibbs sampling MCMC algorithm is more efficient and helpful.
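As a small illustration of sampling using only conditional distributions, here is a sketch of a Gibbs sampler for a standard bivariate normal with correlation rho, where each full conditional is itself a univariate normal. The function name and the burn-in length are illustrative choices.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=5000, burn_in=500, seed=42):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Uses only the full conditionals:
        x | y ~ N(rho * y, 1 - rho**2)
        y | x ~ N(rho * x, 1 - rho**2)
    """
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1 - rho ** 2)
    x, y = 0.0, 0.0
    samples = np.empty((n_samples, 2))
    for i in range(burn_in + n_samples):
        x = rng.normal(rho * y, sd)  # draw x from its conditional given y
        y = rng.normal(rho * x, sd)  # draw y from its conditional given x
        if i >= burn_in:
            samples[i - burn_in] = (x, y)
    return samples
```

After burn-in, the empirical correlation of the draws should be close to the target rho, even though the joint density was never evaluated directly and no acceptance step was needed.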

[Figure: Gibbs sampling illustration]