Bootstrap Hypothesis Testing for Quantiles

Bootstrapping

Bootstrapping is a statistical method that resamples the initial dataset (with replacement) many times in order to create multiple simulated datasets. These simulated datasets allow us not only to estimate quantities such as standard errors or confidence intervals, but also to perform hypothesis tests.

Assumptions for one-sided hypothesis tests on a quantile

The goal is to build a one-sided bootstrap hypothesis testing procedure for the quantile of a random variable \(X\). More precisely, we aim to test \[ H_0: q_X(\alpha) \geq q \quad \text{ against } \quad H_1: q_X(\alpha) < q \]
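
As a reminder of the resampling mechanism that the procedure below specializes, here is a minimal sketch of how bootstrap replicates of a statistic are generated; the simulated data and the statistic (the median of an exponential sample) are arbitrary choices made only for illustration.

# Minimal bootstrap sketch (arbitrary simulated data and statistic)
set.seed(42)
x = rexp(100)   # initial sample
B = 2000        # number of bootstrap samples
theta_boot = replicate(B, {
	x_star = sample(x, size = length(x), replace = TRUE)  # resample with replacement
	median(x_star)                                        # statistic of interest
})
sd(theta_boot)  # bootstrap estimate of the standard error of the median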

Definition 1 Let \(X\) be a random variable. We say that \(x_\alpha\) is an \(\alpha\)-quantile (with \(\alpha \in ]0,1[\)) if it verifies the following two inequalities: \[ \begin{cases} \mathbb{P}(X < x_\alpha) \leq \alpha \\ \mathbb{P}(X \leq x_\alpha) \geq \alpha \end{cases} \]

In general, for a given \(\alpha\), more than one value may verify these two inequalities. One can, however, determine an interval that contains all of them: \[ [q_X^-(\alpha), q_X^+(\alpha)] \] where \[ \boxed{q_X^-(\alpha) = \inf\{x \in \mathbb{R} \quad \text{such that} \quad \mathbb{P}(X \leq x)\geq\alpha\}}\] \[ \boxed{q_X^+(\alpha) = \sup\{x \in \mathbb{R} \quad \text{such that} \quad \mathbb{P}(X < x)\leq\alpha\}}\]
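
As a small illustration of this non-uniqueness, consider an empirical distribution putting equal mass on the four values 1, 2, 3 and 4, with \(\alpha = 0.5\). The sketch below (hypothetical data, with the infimum and supremum taken over the observed values) recovers \(q_X^-(0.5) = 2\) and \(q_X^+(0.5) = 3\): every value in \([2,3]\) is a \(0.5\)-quantile.

# Hypothetical example: four equally likely values
x = c(1, 2, 3, 4)
alpha = 0.5

# q^-(alpha): smallest observed value x with P(X <= x) >= alpha
q_minus = min(x[sapply(x, function(t) mean(x <= t)) >= alpha])
# q^+(alpha): largest observed value x with P(X < x) <= alpha
q_plus  = max(x[sapply(x, function(t) mean(x <  t)) <= alpha])

c(q_minus, q_plus)  # 2 and 3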

Remark. Usually, the quantile function is defined as \(q_X^-(\alpha)\). From now on, however, we will use \(q_X(\alpha) = q_X^+(\alpha)\).
Definition 2 Let \(X\) be a random variable. For any \(\alpha \in ]0,1[\), \[ q_X(\alpha) = q_X^+(\alpha) = \sup\{x \in \mathbb{R} \quad \text{such that} \quad \mathbb{P}(X < x)\leq\alpha\} \] The following function computes this quantile for the empirical distribution of a sample, restricting the search to the observed values.

quant = function(x, alpha){
	# Empirical version of q_X(alpha) = sup{x : P(X < x) <= alpha},
	# with the supremum taken over the observed values
	x = sort(x)
	q = c()
	for (i in 1:length(x)){
		q[i] = sum(x < x[i]) / length(x)  # empirical P(X < x[i])
	}
	return(max(x[q <= alpha]))
}
	
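
As a quick check on hypothetical data, applying quant to the values \(1, \dots, 10\) with \(\alpha = 0.3\) should return 4, the largest value with at most 30% of the observations strictly below it.

# Usage sketch on hypothetical data
x = 1:10
quant(x, alpha = 0.3)
# For x[i], sum(x < x[i])/10 = (i-1)/10; the largest value with (i-1)/10 <= 0.3 is 4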

In order to carry out the hypothesis test, we must first choose and define a test statistic, which we will denote \(T_{q}\).

Let \(X_1,\dots,X_n\) be a sample from a random variable \(X\).

Definition 3 \[ T_{q} = \sum_{i=1}^{n} \mathbb{1}_{]-\infty,q[}(X_i) \]

In other words, \(T_{q}\) represents the number of observations with a value strictly smaller than \(q\).


# Test statistic
T_stat = function(X, q){
	sum(X < q)  # number of observations strictly below q
}
	
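
For instance, on the same hypothetical values \(1, \dots, 10\), exactly four observations are strictly smaller than \(q = 4.5\):

# Usage sketch on hypothetical data
T_stat(1:10, q = 4.5)  # returns 4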

Under \(H_0\), for a sample of size \(n\), the empirical distribution of the sample should then satisfy \[ \frac{n-T_{q}}{n} \geq 1-\alpha \iff T_{q} \leq n\times\alpha \]

Bootstrap hypothesis testing procedure

Before performing the hypothesis test, we must first make sure that the empirical distribution of the sample verifies \(H_0\); otherwise the results will be incorrect. If the empirical distribution of the sample does not verify \(H_0\), we subtract the empirical quantile of interest from every sample value and add \(q\), which ensures that the empirical distribution of the shifted sample verifies \(H_0\).
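
The following sketch illustrates this adjustment on hypothetical data, assuming the quant function defined above: subtracting the empirical \(\alpha\)-quantile and adding \(q\) shifts the whole sample so that its empirical \(\alpha\)-quantile becomes exactly \(q\).

# Illustration of the recentring step on hypothetical data
set.seed(1)
x = rnorm(200)
alpha = 0.3
q = 1

H0_sample = x - quant(x, alpha) + q  # shift every observation by the same amount

quant(H0_sample, alpha)  # equals q, since quant() is translation-equivariant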

We start by generating \(B\) bootstrap samples \(X^{*i}=(X_1^{*i},\dots,X_{n}^{*i})\), drawn with replacement from the sample adjusted to verify \(H_0\). From each bootstrap sample we compute the test statistic \[T_{q}^{*i} = \sum_{j=1}^{n} \mathbb{1}_{]-\infty,q[}(X_j^{*i}) \qquad (i=1,\dots,B)\] and then determine the empirical p-value \[ \widehat{pval} = \frac{1}{B} \sum_{i=1}^{B} \mathbb{1}_{\{ T_{q}^{*i} > T_{q}\}} \] by comparing the \(T_{q}^{*i}\) to \(T_{q}\), the statistic obtained for the initial sample.


bootstrap_quantile_test = function(x, alpha, q, B, sign_level){
	# Test statistic for the initial sample
	T_obs = T_stat(x, q)

	# Sample adjusted to verify H_0
	if (T_obs < length(x)*alpha){ H0_sample = x - quant(x, alpha) + q }
	if (T_obs >= length(x)*alpha){ H0_sample = x }

	# Generate the bootstrap samples and compute their test statistics
	T_boot = c()
	for (i in 1:B){
		boot_sample = sample(x = H0_sample,
		                     size = length(x),
		                     replace = TRUE)
		T_boot = c(T_boot, T_stat(boot_sample, q))
	}

	# Determine the rejection region
	pvals = c()  # p-value associated with each candidate threshold
	for (i in 0:length(x)){
		pvals = c(pvals, mean(T_boot > i))
	}
	names(pvals) = 0:length(x)

	return(list(pval = mean(T_boot > T_obs),
	            rejection_limit = as.numeric(names(which.max(pvals[pvals < sign_level])))))
}
	

Example


sample = runif(1000, 0,100)

( results = bootstrap_quantile_test(x=sample, alpha=0.3, q=50, B=10000, sign_level=0.05) )
## $pval
## [1] 0.4772
## 
## $rejection_limit
## [1] 506
	

We find that the empirical p-value is \[ \widehat{pval} = 0.4772 \] Thus, at the \(5\%\) significance level, we do not reject \(H_0\).

Furthermore, we find that the empirical rejection region for \(T_q\) is \[ [506\,;\,+\infty[ \]