Mines are noisy places: man-made sources such as drills, jackhammers, trucks, and blasts, plus the rock's response to all this activity in the form of unexpected microseismic events. As a mine matures, the stress field changes to accommodate the ore being carved out; this can reactivate previously stable faults, causing them to slip with potentially catastrophic consequences. Being able to predict, or at least catch the warning signs of, this risk therefore has both economic and life-safety incentives.
Numerical modelling can readily predict the wavefield propagating through a mine and its evolution as the mine is worked. From this, a method to detect the changing stress field can be developed. But can we actually collect adequate data in a real mine to satisfy these theoretical methods? Existing microseismic arrays are sparse, and their traces show very poor correlation even at close sensor separations. The usable data are limited to first breaks, which give event location and size, but only relative to a pre-set velocity model. For sampling at the scale numerical models require, this is far from adequate.
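To illustrate what this first-break processing amounts to, here is a minimal sketch of event location by grid search under a homogeneous, pre-set velocity model. The sensor positions, source, velocity, and picked arrival times are all synthetic stand-ins invented for this example, not data from the research described here:

```python
import numpy as np

# Hypothetical sensor positions along tunnel walls (metres) -- illustrative only.
sensors = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [30, 30, -20]], float)
v = 5000.0  # assumed homogeneous P-wave velocity (m/s): the "pre-set model"

# Synthetic first-break picks for a source at a known test location.
true_src = np.array([20.0, 25.0, -10.0])
t0 = 0.1  # unknown event origin time (s)
picks = t0 + np.linalg.norm(sensors - true_src, axis=1) / v

# Grid search: at the correct location, picked times differ from predicted
# travel times by a constant (the origin time), so we minimise the spread
# of the residuals (pick - travel_time) over candidate locations.
best_spread, best_loc = np.inf, None
for x in np.arange(0, 51, 1.0):
    for y in np.arange(0, 61, 1.0):
        for z in np.arange(-30, 1, 1.0):
            tt = np.linalg.norm(sensors - [x, y, z], axis=1) / v
            spread = (picks - tt).std()
            if spread < best_spread:
                best_spread, best_loc = spread, np.array([x, y, z])

print(best_loc)  # a location on the 1 m grid near true_src
```

The point of the sketch is the limitation the text describes: the result is only as good as the assumed velocity, and first breaks alone say nothing about how the wavefield evolves between sensors.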
This research looks to deploy a dense seismic array along the mine's tunnel surfaces to capture the wavefield's progression and evolution. But the first results are not encouraging. Or are they? Hammer seismic tests with tunnel-face-mounted sensors show little correlation, even at 2 m spacings. Is this a problem with the apparatus: mounts, acquisition hardware, etc.? Or do small-scale complexities in the rock itself have such a drastic impact? What needs to be done to resolve these questions? This is where the research sits right now.
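The "little correlation" observation can be quantified with a normalised cross-correlation between adjacent traces. A minimal sketch follows, using synthetic stand-in signals (a shared wavelet buried in sensor-local noise) rather than any real hammer-shot records; the sampling rate, lag, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                      # assumed sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)

# Synthetic stand-ins for two hammer-shot records 2 m apart: a shared
# wavelet plus strong uncorrelated noise at each sensor.
wavelet = np.exp(-((t - 0.02) / 0.002) ** 2) * np.sin(2 * np.pi * 500 * t)
trace_a = wavelet + 0.8 * rng.standard_normal(t.size)
trace_b = np.roll(wavelet, 4) + 0.8 * rng.standard_normal(t.size)  # small lag

def max_norm_xcorr(a, b):
    """Peak of the (full-length-normalised) cross-correlation over all lags.

    Values near 1 mean the traces share a coherent waveform; values near 0
    mean they are effectively incoherent.
    """
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.abs(np.correlate(a, b, mode="full")).max()

print(max_norm_xcorr(trace_a, trace_b))
```

A metric like this, mapped over sensor pairs and separations, is one way to separate the two hypotheses above: apparatus problems should show up as low coherence everywhere, while rock-mass complexity should make coherence fall off with separation and vary with local geology.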