Gravitational waves (GWs) are ripples in spacetime emitted by asymmetrically accelerating masses, and are one of the predictions of Einstein’s general relativity. A century after general relativity, LIGO scientists were awarded the Nobel Prize in Physics for the first direct detection of GWs, from two black holes (each around 30 times the mass of the Sun) merging into a single, more massive one. Gravitational waves remain an active area of research, and efforts led by LIGO (more on LIGO below) aim to detect gravitational waves from merging neutron stars, pulsars, and other astronomical objects. This project aims to use the LIGO data to check for GWs from the Sun and, in the case of no detection, to set an upper limit on their possible strain.
The following animation shows the effect of a gravitational wave passing through the fabric of spacetime (credit: Hahn, 2013):
GW-generating solar dynamics
An essential question: why would we expect the Sun to emit GWs large enough to be detected?
The answer is that recent theoretical papers show that the magnetohydrodynamic (MHD) dynamo at the core of the Sun should emit GWs that could leave a strain signal large enough for LIGO to pick up.
LIGO (the Laser Interferometer Gravitational-wave Observatory) operates two detectors, in Livingston, Louisiana and Hanford, Washington. Each detector is a Michelson interferometer with Fabry-Pérot cavities as its 4 km interferometric arms. The laser is reflected back and forth repeatedly, giving an effective length of approximately 300 km per arm. At rest, the two returning beams interfere destructively at the output due to their phase difference. When a GW passes through the detector, it stretches space in one direction while compressing it in the perpendicular direction, introducing a phase difference that stops the beams reflected from the two arms from cancelling out, and thereby revealing the GW. With two separate detectors, one can target a particular sky position by looking at the time delay between them: a GW arrives at one detector before the other because GWs travel at the finite speed of light. In this project, the coordinates of the Sun are used, and updated over the period of the analysis, to set this time delay. The delay changes dynamically because the direction from us to the Sun changes dynamically, but astropy has a nice function that tracks the position of the Sun.
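As a minimal sketch of how the time delay is used: given the Sun's sky direction (which astropy's `get_sun` provides in the real analysis), the expected arrival-time difference between the two sites is the projection of the detector baseline onto the line of sight, divided by the speed of light. The detector positions below are the published Earth-fixed LIGO site coordinates (rounded), and the helper function name is illustrative:

```python
import numpy as np

# Earth-fixed (ECEF) positions of the two LIGO sites, in metres
# (published LIGO detector coordinates, rounded here).
R_H1 = np.array([-2161414.9, -3834695.2, 4600350.2])  # Hanford
R_L1 = np.array([-74276.0, -5496283.7, 3224257.0])    # Livingston
C_LIGHT = 299792458.0  # speed of light, m/s

def time_delay(source_dir):
    """Arrival-time difference (H1 minus L1, in seconds) for a GW
    arriving from the unit direction `source_dir` (Earth -> source)."""
    n = np.asarray(source_dir, dtype=float)
    n = n / np.linalg.norm(n)
    # The wavefront travels along -n, so the detector with the larger
    # projection onto n is hit first.
    return -np.dot(R_H1 - R_L1, n) / C_LIGHT

# The largest possible delay is for a source along the H1-L1 baseline:
baseline = R_H1 - R_L1
max_delay = np.linalg.norm(baseline) / C_LIGHT
```

`max_delay` comes out close to the familiar ~10 ms maximum H1–L1 light-travel delay; in the analysis, the source direction would be updated from the Sun's ephemeris as it moves across the sky.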
The sixth LIGO science run (S6) data was used in this analysis; LIGO reaches its maximum strain sensitivity in roughly the 100–150 Hz band (strain is a dimensionless quantity: the change in length divided by the original length of the laser’s path). The data is now available (along with even newer data) to the public at https://losc.ligo.org/data/ so you can download it and re-create this analysis.
First, we download the entire LIGO S6 run (you can do so from here).
LIGO has its own TimeSeries data structure; to use it, install gwpy, after which we can initialise the strain series as follows:
```python
from gwpy.timeseries import TimeSeries

starttime = 969062862.0  # start of this segment of the S6 run, in GPS time
endtime = 969063629.0    # end of the segment
gpsStartH = starttime
durationH = endtime - starttime
Xspacing = 2.44140625E-4  # the time step at which LIGO records data, in seconds

# Strain signals for H1 (shorthand for the Hanford, Washington detector)
# and L1 (short for the Livingston, Louisiana detector):
strainH = TimeSeries.read('S6framesH1.lcf', channel='H1:LDAS-STRAIN',
                          start=starttime, end=endtime)
strainL = TimeSeries.read('S6framesL1.lcf', channel='L1:LDAS-STRAIN',
                          start=starttime, end=endtime)
```
The frequency range that LIGO is most sensitive to is between 100 Hz and 150 Hz, as can be seen from this figure (originally from this paper):
Therefore, we use a bandpass filter to get rid of the data (well, mostly the noise) outside this range. One other thing we need to filter out is the set of notch lines visible in the sensitivity plot. These are due to known instrumental spectral lines and cause extremely high noise in the data. While a complete understanding of the filtering process might require a course on electronics, here are the basic steps:
```python
import numpy as np
from scipy.signal import butter, filtfilt

# Apply a cascade of filters to the data
def filter_data(data_in, coefs):
    data = data_in.copy()
    for coef in coefs:
        b, a = coef
        # filtfilt applies a linear filter twice, once forward and once
        # backwards, so the combined filter has zero phase.
        data = filtfilt(b, a, data)
    return data

# Build the list of (b, a) filter coefficients that filter_data needs
def get_filter_coefs(det, fs=4096):
    from notchfilt import iir_bandstops  # notch-design helper

    coefs = []

    # bandpass filter parameters
    lowcut = 100
    highcut = 150
    order = 4

    # bandpass filter coefficients
    nyq = 0.5 * fs
    low = lowcut / nyq
    high = highcut / nyq
    bb, ab = butter(order, [low, high], btype='band')
    coefs.append((bb, ab))

    # You can see these lines in the ASD above, so it is straightforward
    # to make this list. The L1 and H1 lines are different, so we need
    # to define them for both:
    if det == 'L1':
        notchesAbsolute = np.array([120.0, 139.94, 145.06, 108.992])
    elif det == 'H1':
        notchesAbsolute = np.array([120.0, 139.95, 140.41, 108.992])
    else:
        raise ValueError('Detector can only be H1 or L1')

    # notch filter coefficients:
    for notchf in notchesAbsolute:
        bn, an = iir_bandstops(np.array([[notchf, 1, 0.1]]), fs, order=4)
        coefs.append((bn, an))
    return coefs
```
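To sanity-check the bandpass stage on its own (leaving the notch helper aside), one can run synthetic tones through the same Butterworth design and confirm that an in-band tone survives while an out-of-band tone is suppressed; the two tone frequencies here are arbitrary test values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 4096
t = np.arange(0, 4, 1.0 / fs)
in_band = np.sin(2 * np.pi * 120.0 * t)   # inside the 100-150 Hz band
out_band = np.sin(2 * np.pi * 60.0 * t)   # outside the band

nyq = 0.5 * fs
b, a = butter(4, [100.0 / nyq, 150.0 / nyq], btype='band')

# Filter each tone and compare RMS amplitudes away from the edges
mid = slice(fs, 3 * fs)  # ignore filter edge transients
rms = lambda x: np.sqrt(np.mean(x[mid] ** 2))

rms_in = rms(filtfilt(b, a, in_band))    # ~0.7, essentially unattenuated
rms_out = rms(filtfilt(b, a, out_band))  # strongly suppressed
```

The 60 Hz tone is attenuated by several orders of magnitude while the 120 Hz tone passes through at nearly full amplitude, which is exactly the behaviour the analysis relies on.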
Finally, we get much cleaner data, which looks like this (note the three-orders-of-magnitude difference on the y-axis). One has to be careful to throw away the first few minutes of the data after filtering, since the filter leaves a sharp transient response at the start, when it is first turned on. (Wondering why?)
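The start-up transient is easy to demonstrate: filter an in-band tone causally (with `lfilter`, starting from rest, rather than the zero-phase `filtfilt`), and the output takes a while to ring up to its steady-state amplitude, so the first stretch of output is unreliable:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 4096
b, a = butter(4, [100.0 / (0.5 * fs), 150.0 / (0.5 * fs)], btype='band')

# An in-band 120 Hz tone switched on at t = 0, filtered from rest:
t = np.arange(0, 2, 1.0 / fs)
x = np.sin(2 * np.pi * 120.0 * t)
y = lfilter(b, a, x)

rms = lambda seg: np.sqrt(np.mean(seg ** 2))
early = rms(y[:32])      # right after the filter is "switched on"
settled = rms(y[fs:])    # after the transient has died away
```

The early output is far below the settled amplitude: the filter's internal state needs time to charge up, which is why the first stretch of filtered data gets discarded.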
The original LIGO S6 data has a sampling rate of 16384 data points per second. Since the maximum frequency kept after filtering is 150 Hz, the data contains more samples than are useful. It was therefore down-sampled to 512 data points per second, speeding up the analysis by a factor of about 32 without losing any useful information.
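A sketch of the down-sampling step using scipy (the analysis itself could equally use gwpy's resampling): `decimate` low-pass filters before downsampling to prevent aliasing, and scipy recommends splitting a large factor like 32 into stages:

```python
import numpy as np
from scipy.signal import decimate

fs_orig = 16384
fs_new = 512  # total decimation factor of 32

t = np.arange(0, 1, 1.0 / fs_orig)
x = np.sin(2 * np.pi * 100.0 * t)  # a 100 Hz tone, well below the new Nyquist (256 Hz)

# Decimate in two stages, 8 * 4 = 32; each stage anti-alias filters first.
y = decimate(decimate(x, 8, zero_phase=True), 4, zero_phase=True)
```

One second of data shrinks from 16384 samples to 512, while the 100 Hz tone comes through essentially unchanged.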
A simple approach is to cross-correlate the signals from the two detectors and use the result to set upper limits on the strain of gravitational waves emitted from the Sun that LIGO could detect; a better (and much more complicated) way to do this is Bayesian inference.
We use Bayesian inference to get the joint likelihood for a GW signal in both detectors at each timestamp. Using matrix notation as in Appendix B of this paper, we write the model as

$d = M a + n$,

where $d$ is the data vector, $M$ is a design matrix, $a$ is the parameter vector of signal amplitudes, and the noise $n$ has data covariance matrix $C$ (with inverse $C^{-1}$), so that the likelihood is

$\mathcal{L}(a) \propto \exp\!\left[-\tfrac{1}{2}(d - Ma)^{T} C^{-1} (d - Ma)\right]$.

For the Gaussian priors on the amplitudes we use zero mean and a covariance matrix (and its inverse) of

$\Sigma = \sigma^{2} I, \qquad \Sigma^{-1} = \sigma^{-2} I$.

We marginalise the amplitudes by multiplying the likelihood by these priors and integrating each amplitude analytically between $-\infty$ and $\infty$, following the form of equation B2 of this paper, writing this as

$p(d \,|\, \sigma) \propto |\Sigma|^{-1/2}\,|A|^{-1/2} \exp\!\left[-\tfrac{1}{2} d^{T} C^{-1} d + \tfrac{1}{2} b^{T} A^{-1} b\right]$, with $A = M^{T} C^{-1} M + \Sigma^{-1}$ and $b = M^{T} C^{-1} d$.

We set a Jeffreys prior on the remaining scale parameter, $p(\sigma) \propto 1/\sigma$, representing our lack of knowledge of it, and then marginalise over $\sigma$:

$p(d) = \int p(d \,|\, \sigma)\, p(\sigma)\, \mathrm{d}\sigma$.

In practice, we do this integral numerically over a grid of $\sigma$ values between 0 and an upper bound. Finally, we sum over timestamps.
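The amplitude marginalisation has a well-known closed form that makes a good numerical check: integrating a Gaussian likelihood against a zero-mean Gaussian prior on the amplitudes is the same as evaluating a single Gaussian in the data with covariance $C + M\Sigma M^{T}$. A small numpy verification of that identity, with made-up matrices standing in for the real design matrix and covariances:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 2  # n data points, p amplitude parameters

M = rng.standard_normal((n, p))        # stand-in design matrix
C = np.diag(rng.uniform(0.5, 2.0, n))  # stand-in noise covariance
sigma = 1.5
Sigma = sigma ** 2 * np.eye(p)         # Gaussian prior covariance on amplitudes
d = rng.standard_normal(n)             # stand-in data vector

def log_gauss(x, cov):
    """log of a zero-mean multivariate Gaussian density at x."""
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet
                   + x @ np.linalg.solve(cov, x))

# Form 1: marginalise the amplitudes analytically (equation-B2 style)
Cinv = np.linalg.inv(C)
A = M.T @ Cinv @ M + np.linalg.inv(Sigma)  # posterior precision of amplitudes
b = M.T @ Cinv @ d
_, logdetA = np.linalg.slogdet(A)
_, logdetC = np.linalg.slogdet(C)
_, logdetS = np.linalg.slogdet(Sigma)
logZ_analytic = (-0.5 * (n * np.log(2 * np.pi) + logdetC + logdetS + logdetA)
                 - 0.5 * d @ Cinv @ d + 0.5 * b @ np.linalg.solve(A, b))

# Form 2: the equivalent single Gaussian in the data
logZ_direct = log_gauss(d, C + M @ Sigma @ M.T)
```

The two forms agree to machine precision, which is the identity the mostly-analytical marginalisation relies on.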
The following figure shows the probability distribution over a range of strain values. The strain grid was chosen empirically to suit the amount of data used. Using 346 hours of the LIGO S6 data, we find no plausible solar GW signal: the probability peaks at zero strain. We still aim to use more data, along with the recent and more-sensitive O1 data from advanced LIGO, to measure a more accurate probability distribution.
The following figure is a cumulative plot created from the same results and strain values as the previous figure. It was used to infer the strain value at 95% confidence, i.e. to set an upper limit on solar GWs, using a simple linear interpolation between the two nearest grid points (the 11th and 12th). The upper limit found this way is about five to six orders of magnitude lower than the one set in the previous project.
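The interpolation step can be sketched generically: given posterior probabilities on a grid of strain values, form the cumulative sum and linearly interpolate to find where it crosses 0.95. The grid and posterior below are toy values, not the analysis results:

```python
import numpy as np

# Toy posterior over a grid of strain values (the real one comes from the analysis)
strain_grid = np.linspace(0.0, 1.0, 21)
post = np.exp(-5.0 * strain_grid)  # decaying toy posterior, peaked at zero strain
post /= post.sum()                 # normalise to unit total probability

cdf = np.cumsum(post)
# Linearly interpolate the inverse CDF at 95% to get the upper limit:
upper_limit_95 = np.interp(0.95, cdf, strain_grid)
```

`np.interp` does exactly the "linear interpolation between the two nearest points" described above, since the cumulative sum is monotonically increasing.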
Major breakthroughs have been made recently by LIGO and the LSC, confirming once and for all that GWs exist and that LIGO works as a GW detector. Most GW searches target dramatic and violent events in the Universe, such as binary black-hole mergers, supernovae and GRBs, pulsars, and the stochastic GW background. Here we take a Bayesian approach to searching for GWs from the Sun, possibly emitted by an MHD dynamo at the solar core. We use the LIGO S6 data, bandpass-filtered between 100 and 150 Hz and notch-filtered to cancel the effect of the S6 instrumental lines. A mostly-analytical Bayesian analysis of 346 hours of S6 data yields the probability of a GW signal emitted from the Sun, and reveals no detection; we therefore set an upper limit on solar GWs at 95% confidence.
Beyond the trivial, yet probable, reasons for not detecting solar GWs, such as detector sensitivity and the amount of data analysed (which we aim to address soon with more, and more sensitive, data), it is possible that GWs from the Sun lie at lower frequencies than the current LIGO data lets us search. Future space-based GW detectors such as LISA (the Laser Interferometer Space Antenna) will be able to detect GWs at frequencies far below LIGO's peak sensitivity range, which might be better suited to solar GWs, particularly if they are emitted by an MHD dynamo.