
probability theory question: mean subsequent block time

I'm trying to arrive at the function that describes the following, but can't quite figure it out for multiple blocks. (there are some useful insights for a single block here.)

assume the arrival of proof-of-work blocks on Monero is a Poisson point process. the mean of the block times is the target block time t (120 seconds).

also assume n subsequent blocks.

also assume p, the probability that those n subsequent blocks have a mean block time less than or equal to T.

given t, n and p, how can I calculate T?
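
in symbols: assuming the Poisson process, the n block times are i.i.d. exponential with mean t, and I'm looking for the T that satisfies

$$X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} \operatorname{Exp}(1/t), \qquad p = \Pr\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i \le T\right).$$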

7 comments
  • If I understand your question right, I think you're looking for the inverse cumulative distribution function (a.k.a. quantile function) of the Erlang distribution.

    The random length of time to mine the next block has an exponential distribution with rate parameter 1/t. The length of time to mine n blocks has an Erlang distribution with shape parameter n and rate parameter 1/t.

    The Erlang distribution is a special case of the Gamma distribution. The Erlang distribution's shape parameter must be an integer, but the Gamma distribution's shape parameter can be any positive real number. We can use the Gamma distribution if our software doesn't provide the Erlang distribution directly.
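
    Concretely, the mean of the n block times is at most T exactly when their total is at most nT. Writing $F_{n,1/t}$ for the Erlang (equivalently Gamma) CDF with shape $n$ and rate $1/t$, this gives

    $$p = \Pr\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i \le T\right) = F_{n,1/t}(nT), \qquad\text{so}\qquad T = \frac{F_{n,1/t}^{-1}(p)}{n}.$$

    The inverse CDF $F_{n,1/t}^{-1}$ is what qgamma computes, which is why the result is divided by n.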

    You would compute T in the R language with:

        qgamma(p = p, shape = n, rate = 1/t)/n

    The results of this simulation match the closed-form computation:

        t <- 120   # target block time in seconds
        n <- 15    # number of subsequent blocks
        p <- 0.4   # target probability

        set.seed(314)

        # Simulate 100,000 runs of n exponential block times (one run per row)
        mining.times <- matrix(rexp(n * 100000, rate = 1/t), ncol = n)

        # Total time to mine the n blocks in each run
        mining.times <- rowSums(mining.times)

        # Empirical p-quantile of the mean block time
        quantile(mining.times/n, probs = p)

        # Closed-form computation; divide by n to get the mean instead of the total
        qgamma(p = p, shape = n, rate = 1/t)/n
      
    • thank you very much, Rucknium. your understanding of my question was spot-on, and the R code works excellently! very useful.

      I'd like to ask a few more questions:

      • is 314 a common seed in R, or just something you randomly picked?
      • in statistics in general, are there cases where n * 100000 random samples (any distribution) would be insufficient? is it a good rule of thumb? (my rough attempt at checking this is sketched below.)
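
      I'm guessing one way to check is to repeat the whole simulation under different seeds and look at how much the quantile estimate moves around. a minimal R sketch (the 20 repetitions are an arbitrary choice, just for illustration):

          t <- 120
          n <- 15
          p <- 0.4

          # Repeat the full simulation 20 times, keeping the estimated quantile from each run
          estimates <- replicate(20, {
            totals <- rowSums(matrix(rexp(n * 100000, rate = 1/t), ncol = n))
            quantile(totals/n, probs = p)
          })

          # The spread across repetitions approximates the Monte Carlo error of a single run;
          # if it's small relative to the precision needed, the sample size is sufficient
          sd(estimates)

          # Exact value for comparison
          qgamma(p = p, shape = n, rate = 1/t)/n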