Distributed Acoustic Sensing (DAS)
I have an iDAS (intelligent distributed acoustic sensing) dataset obtained from an undersea optical fibre. iDAS data have a 2D representation. On one axis we have the channel axis, i.e. the point on the cable at which the strain rate is measured from the backscattered light (Rayleigh backscatter) at that point, and on the other axis we have the time samples, acquired at a fixed sampling frequency. Therefore, iDAS data carry both spatial and temporal information. Another way to think of this is to fix a particular channel: for that channel we obtain a signal measuring the strain rate of the cable as a function of time.
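To make the data layout concrete, here is a minimal NumPy sketch; the channel count, sampling frequency, and duration are made up for illustration:

```python
import numpy as np

# Hypothetical iDAS recording: 100 channels along the fibre,
# 10 s sampled at 1000 Hz -> 10000 time samples per channel.
n_channels, fs, duration_s = 100, 1000, 10
data = np.random.randn(n_channels, fs * duration_s)  # strain rate, shape (channel, time)

# Fixing one channel gives an ordinary 1-D time series of the strain rate
# at that point on the cable.
channel_idx = 50
trace = data[channel_idx]          # shape (10000,)
t = np.arange(trace.size) / fs     # time axis in seconds
```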
Motivation
This technology can be used in various applications, e.g. earthquake detection (see [1] and the video webinar in [2], for example), detection of volcanic events [3], and many others. However, a major challenge with these datasets is suppressing the noise that arises from irrelevant events. My aim is to tackle this problem with a self-supervised deep learning approach. There are some papers in the literature addressing this, such as [4]. I have verified the approach of [4] on the datasets that the authors use, and it also works in some other cases. However, I would like to improve the results on a specific dataset.
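For context, the core idea in [4], as I understand it, is blind-trace masking: hide some channels from the network input and reconstruct them from the neighbouring channels, so that the noisy recording itself acts as the training target. Below is a rough PyTorch sketch of one such training step; the `denoiser` model, the `optimizer`, and all sizes are placeholders, not the authors' exact setup:

```python
import torch

def blind_channel_step(denoiser, optimizer, batch, n_masked=1):
    """One self-supervised training step in the spirit of blind-trace masking:
    a few channels are hidden from the input, and the loss is computed only on
    those hidden channels, so no clean target data is needed.

    batch: float tensor of shape (B, 1, channels, time) with noisy DAS patches.
    """
    B, _, n_ch, n_t = batch.shape
    # Per-sample channel mask: 1 where a channel is hidden from the network.
    mask = torch.zeros(B, 1, n_ch, 1, device=batch.device)
    for b in range(B):
        hidden = torch.randperm(n_ch)[:n_masked]
        mask[b, 0, hidden, 0] = 1.0
    masked_input = batch * (1.0 - mask)   # blank out the hidden channels
    pred = denoiser(masked_input)         # network predicts the full patch
    # Penalise the reconstruction only on the channels the network never saw.
    loss = ((pred - batch) ** 2 * mask).sum() / (mask.sum() * n_t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```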
Question
Therefore, I would be very pleased if anyone could provide references, ideas, or approaches (e.g. different architectures) for this problem. One idea is to approach this problem via Vision Transformers, e.g. similarly to the masked-autoencoder setup in [5] (a rough sketch of what I have in mind follows below). Papers on signal denoising via self-supervised techniques might also provide valuable information.
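To illustrate the masked-autoencoder direction: split DAS windows into patches, mask a random subset of patches, and train a small transformer to reconstruct the masked patches. Unlike the original MAE in [5], which drops masked patches from the encoder entirely, the sketch below simply zeroes them; all names and sizes are illustrative, not a faithful reimplementation.

```python
import torch
import torch.nn as nn

def patchify(x, p=16):
    """Split (B, 1, H, W) DAS windows into non-overlapping p x p patches,
    flattened to (B, n_patches, p*p). H and W are assumed divisible by p."""
    B, _, H, W = x.shape
    x = x.unfold(2, p, p).unfold(3, p, p)            # (B, 1, H/p, W/p, p, p)
    return x.reshape(B, (H // p) * (W // p), p * p)

class MaskedPatchDenoiser(nn.Module):
    """Hypothetical, heavily simplified MAE-flavoured model: masked patches
    are zeroed (not dropped from the encoder as in [5]) and every token is
    decoded back to a patch."""
    def __init__(self, patch_dim=256, d_model=128, n_patches=64, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(patch_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decode = nn.Linear(d_model, patch_dim)

    def forward(self, patches):
        B, N, _ = patches.shape
        n_masked = int(self.mask_ratio * N)
        mask = torch.zeros(B, N, 1, device=patches.device)
        for b in range(B):
            mask[b, torch.randperm(N)[:n_masked], 0] = 1.0
        tokens = self.embed(patches * (1.0 - mask)) + self.pos
        recon = self.decode(self.encoder(tokens))
        # Reconstruction loss only on the patches the encoder never saw.
        return ((recon - patches) ** 2 * mask).sum() / (mask.sum() * patches.size(-1))

# Usage: 64-channel x 256-sample DAS windows -> 4 x 16 = 64 patches of 16 x 16.
x = torch.randn(8, 1, 64, 256)
model = MaskedPatchDenoiser(patch_dim=16 * 16, n_patches=(64 // 16) * (256 // 16))
loss = model(patchify(x, p=16))
loss.backward()
```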
References
[1] Distributed acoustic sensing of microseismic sources and wave propagation in glaciated terrain.
[2] Fiber Optic Seismology In Theory And Practice (Video Webinar on YouTube).
[3] Fibre optic distributed acoustic sensing of volcanic events.
[4] A Self-Supervised Deep Learning Approach for Blind Denoising and Waveform Coherence Enhancement in Distributed Acoustic Sensing Data.
[5] Masked Autoencoders Are Scalable Vision Learners.