I add a few plots that I was preparing for a similar entry. Fig 1 and fig 2 show the behaviour of COMMp, spoiled by the quality of the dither signals. The clip at +-10 added yesterday on the dither signals reduced the effect on the alignment a bit, but the quality was still quite poor, probably because this morning the disturbance was somewhat larger than yesterday. Now the seismic conditions are getting better: we can hope to see the alignment improve as well.
To explain better what is happening to the dither signals: they are built by demodulating the longitudinal correction on each mirror at the frequency of the angular lines injected on the same mirror. This was a good strategy in the past, when the corrections were dominated by DARM (and DARM was better below 10 Hz). The control of CARM SLOW changed the situation a bit: its correction signal can be larger than the DARM correction because of sensing noise, and in that case the demodulated signal becomes blind to the angular lines. As we can see in fig 3, CARM SLOW is sensitive to seismic noise and wind. We should check whether this sensitivity has worsened recently. I remember fringes disturbing the RFC error signal in the past, which could be cured by improving the efficiency of the Faraday isolator, the RFC alignment, or the general INJ alignment.
A simple solution for improving the dither signals is to increase the line amplitude; a side effect is a larger injection of noise into DARM at double the frequency, around 15 Hz. That frequency region is useless for the BNS range, so the cost would be low.
There are better solutions: the modulation/demodulation strategy can surely be improved a lot, using a signal based on DARM alone, but a bit of work is required. We also know that the centering signals generated by the cameras are good enough and could replace the dither signals.
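The demodulation scheme described above amounts to a lock-in measurement: mix the longitudinal correction with references at the dither-line frequency and low-pass the result. A minimal illustrative sketch follows; the function and parameter names are mine, not the online implementation, and the boxcar low-pass is a stand-in for whatever filter runs in the real-time system.

```python
import numpy as np

def dither_demod(correction, f_line, fs, tau=10.0):
    """Lock-in demodulation of a longitudinal correction signal at the
    angular dither-line frequency f_line (Hz), sampled at fs (Hz).
    Returns the estimated line amplitude vs time."""
    t = np.arange(len(correction)) / fs
    # Mix down with in-phase and quadrature references at the line frequency
    i_mix = correction * np.cos(2 * np.pi * f_line * t)
    q_mix = correction * np.sin(2 * np.pi * f_line * t)
    # Boxcar low-pass over an integration time tau (illustrative choice)
    n = max(1, int(tau * fs))
    kernel = np.ones(n) / n
    i_lp = np.convolve(i_mix, kernel, mode='valid')
    q_lp = np.convolve(q_mix, kernel, mode='valid')
    # Factor 2 restores the amplitude lost in the mixing
    return 2 * np.sqrt(i_lp**2 + q_lp**2)
```

The point made in the entry is visible in this picture: any broadband noise in the correction (e.g. a noisy CARM SLOW contribution) that leaks through the low-pass competes with the line amplitude, and once it dominates, the demodulated output no longer tracks the angular line.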
Since yesterday we have been observing anomalous behaviour in some correction signals: looking at an unlock, many of them resemble the one in Figure 1, where the ITF simply drifts away. A noticeable oddity, however, is the big difference between the WE and NE corrections, with the former showing much higher values at low frequency (a mode at ~0.7 Hz?).
Looking at the spectra in Figure 2, the difference is visible; the opposite behaviour of the MAR corrections (NI noisier than WI) is also visible, although in a higher frequency range.
Comparing the spectra with those from the other night (after the change in behaviour around midnight), the corrections are much noisier below 10 Hz; I don't know whether this can be attributed to the weather conditions alone. Some of the spectra show structures similar to scattered light, and in the 6-10 Hz region the dithering lines are completely covered.
Looking at DARM itself, from which those signals are demodulated, Figure 4 shows three subsets of data from the same 2 minutes used for the previous plots: the low-frequency noise is very non-stationary, and approaching the unlock it becomes much higher, most probably spoiling the dithering signals used for the alignment of the soft modes.
This was also observed yesterday, when unrealistically high signals were seen before an unlock; already yesterday Paolo put in place a clipping of the signals used by the alignment loops, but this cannot fix the quality of the signals.
21:00 UTC ITF found attempting to lock in BAD WEATHER
21:35 UTC ITF set to LOCKED_ARMS_BEAT_DRMI_3F
21:42 UTC ITF set to LOCKED_ARMS_IR since it couldn't hold LOCKED_ARMS_BEAT_DRMI_3F. Waiting for seismic, wind and sea activity to lessen
00:26 UTC data loss (pic 1) SUSP_Fb error message: 2025-08-30-00h26m01-UTC>ERROR..-[TolmFrameBuilder::FrontEnd] Rtai DAQ FIFO overrun -> acquisition is reset
3:40 UTC ITF reached LOCKED_ARMS_BEAT_DRMI_3F and was able to keep it.
4:15 UTC given that the stability of LOCKED 3F suggested a slightly softer impact of the weather, I set the Lock Request to CARM_NULL_1F to see the unlock pattern and understand whether to call the ISC or the TCS expert.
4:38 UTC ITF back to LOCKED_ARMS_BEAT_DRMI_3F - bad weather again
TCS
At 4:30, given the increase in both WI_CO2_CH_PICKOFF (pic 3) and WI_CO2_CH_PWRLAS (pic 4) and the continuous unlocks that had started the previous day and couldn't be investigated due to bad weather, I called the expert.
SBE
From 21:19 UTC on, the SIB2_SBE position loop opened 15 times, which makes a total of 16 from 18:06 UTC including the one in the previous shift (pic 2). After a quick check with Bersanetti about whether it was possible to try a lock (we decided it wasn't useful given the weather), we agreed to wait before contacting the expert, because the environmental conditions were too unstable to allow working on the SBEs.
Guard Tours (UTC)
21:08 - 21:44
23:01 - 23:37
1:02 - 1:38
3:03 - 3:40
4:52 -
The interferometer was found in Science mode at the beginning of the shift. It unlocked at 13:05:14 UTC (GPS 1440507932). At 14:04 UTC, after several unsuccessful lock attempts in which the ITF did not get past the CARM_NULL state, the operational mode was switched to TROUBLESHOOTING, following instructions from Bersanetti. This change was made to allow further investigation into the ongoing locking difficulties. At 15:21 UTC, the interferometer was moved to the BAD_WEATHER mode due to intense microseismic activity; see attached figure.
Guard tours (UTC)
SBE
SBE_SIB2 loop opened at 18:06:02 UTC. Closed at 18:43:14.
In the early hours of August 29, shortly after midnight, a drop of about 3 Mpc was observed in the BNS range; see Fig. 1.
A comparison of the sensitivity curves (Fig. 2) shows a significant increase in broadband noise starting from 50 Hz.
To investigate the origin, I performed a brute-force correlation with all ~15k _mean and _rms channels from the trend frame. Attached is the list of those with the highest Pearson correlation values (in absolute terms). Beyond several channels related to B4, the most interesting correlation (though it could well be coincidental, as often happens with this kind of analysis...) is a nearly simultaneous jump in the RMS of the acceleration at the Mode Cleaner. This RMS remains at the new level as of now (see Fig. 3). It might provide a useful clue on where to start looking, but I leave the detailed assessment to the experts.
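A brute-force scan of this kind can be sketched in a few lines; this is only an illustrative version (function name and the channel dictionary are hypothetical, the real analysis ran over the ~15k _mean/_rms trend channels):

```python
import numpy as np

def rank_by_correlation(target, channels):
    """Rank trend channels by |Pearson correlation| with a target series.
    `channels` maps channel name -> 1-D array, each the same length as
    `target`. Returns (name, |r|) pairs, highest correlation first."""
    target = np.asarray(target, dtype=float)
    scores = {}
    for name, data in channels.items():
        data = np.asarray(data, dtype=float)
        if data.std() == 0 or target.std() == 0:
            continue  # constant channels carry no correlation information
        r = np.corrcoef(target, data)[0, 1]
        scores[name] = abs(r)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

As the entry cautions, a high ranking here is only a hint: with thousands of channels, some large |r| values are expected by chance, so the top entries are starting points for the experts, not causes.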
Finally, an inspection of the DMS playback did not reveal any particular red flags at the time of the drop.
This morning, Maria was contacted by the control room and asked to revert the change in power of the NI DAS outer ring that had been performed on Tuesday (67571). She carried out the change at around 9:00 UTC.
I’m not sure if this step improved the situation, since the operator’s entry also mentions bad weather (67592).
The power on the pickoff is now 0.502 W and 3.1 W injected into the ITF.
This morning the ITF unlocked at 5:00 UTC. For the whole shift it wasn't possible to achieve a stable lock at LOW_NOISE_3, with unlocks at the CARM_NULL steps; the wind and sea activity, which increased a bit during the morning, didn't help in understanding the possible problem. The ITF locked just at the end of the shift, and Science mode started at 12:43 UTC.
Sub-system reports
SUSP
At 5:37 UTC and 8:34 UTC the WI local controls were opened by the guardian following the unlock, then properly closed.
The data loss around 2025-08-29-16h00m16s-UTC is due to the fact that SUSP_Fb was unable to transmit its frames to the FbmFFE server:
2025-08-28-16h00m16-UTC>WARNING-FdFrMrgr: Could not wait longer for frame parts, output 1440432025.8, nSources=15 first:SSFS_Fb
2025-08-28-16h00m16-UTC>WARNING-FdFrMrgr:1440432025.8 frames from SUSP_Fb start to be missing (isReady)
2025-08-28-16h00m16-UTC>WARNING-FdFrMrgr:1440432025.8 frames from SUSP_Fb start to be missing (out)
2025-08-28-16h00m16-UTC>INFO...-CfgReachState> Active(Active) Ok
The data loss around 23h27m37s-UTC is due to the crash of the FbmFFE server:
[Fri Aug 29 01:08:45 2025] FdIOServer.exe[32038]: segfault at 16ad ip 00007f44ac0e88de sp 00007fffc39d2120 error 4 in libframel.so.8.47.1[7f44ac0d6000+5b000] (Note: The timestamp could be inaccurate!)
The FbmFFEDy server, reading the FbmFFE shared memory, reports the following messages:
2025-08-28-23h27m53-UTC>INFO...-CfgReachState> Active(Active) Ok
2025-08-28-23h30m18-UTC>INFO...-InputDir: first file to open after a scan:/dev/shm/VirgoOnline/FbmFFE/V1FFE-1440459035d-e.gwf
2025-08-28-23h30m18-UTC>WARNING-FdIOGetFrame: miss 165.2 seconds between 1440458870.4 and 1440459035.6
2025-08-28-23h30m18-UTC>INFO...-Input frames are back; gps=1440459035 latency=0.5
According to the FmStol01 monitoring, data are missing for 170 s, from GPS 1440458870 (2025-08-28-23h27m32-UTC) to GPS 1440459040 (2025-08-28-23h30m32-UTC).
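The two gap figures quoted (165.2 s by FdIOGetFrame, 170 s by FmStol01) can be cross-checked directly from the GPS stamps in the messages; the small discrepancy simply reflects the 0.2 s subframe boundaries and the coarser granularity of the monitoring:

```python
# GPS stamps taken verbatim from the log messages above
merger_gap = 1440459035.6 - 1440458870.4   # gap reported by FdIOGetFrame
monitor_gap = 1440459040 - 1440458870      # gap reported by FmStol01
print(merger_gap, monitor_gap)             # ~165.2 and 170 seconds
```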
21:00 UTC ITF found in SCIENCE
23:27 UTC Data loss (attached pic.): FbmFFE process stopped abruptly as it did at 16:00 (pic.1 from #67587). Restarted from VPM.
5:00 UTC ITF left in SCIENCE
Guard Tours (UTC)
- 21:38
23:03 - 23:38
1:03 - 1:39
3:13 - 3:48
Today, around 19:13 UTC, I restarted Hrec to use the new bias file /virgoData/Hrec/Hrec_bias_202508_with50.txt, which is based on checkhrec measurements done between July 14th and Aug 25th. This update of the bias file is a follow-up to the observation that, since the start of July, the hrec/hinj bias was impacted by the changes that occurred on the ITF.
Plot1 shows the residual bias before the restart of Hrec.
Plot2 shows the residual bias after the restart of Hrec.
At the start of the shift, the interferometer had just recovered from an unlock likely due to intense wind, and was in Science mode for about 30 minutes. Another unlock occurred at 13:41:19 UTC, GPS 1440423697. After a series of unlocks due to the strong wind (50+ km/h) before reaching the LOW_NOISE_3 state, at 14:35 UTC the interferometer was set to Commissioning mode, moving up the scheduled activities. LOW_NOISE_3 was recovered at 18:57 UTC, after the calibration activities.
Below, the list of performed activities:
15:35 - Started Measurement of the TF between CAL and Sc channels, and of PCal sensing delays. Concluded at 15:48 UTC
16:20 - Measurement of actuators response for PR and BS mirrors. Issue with DET_MAIN shutter closed when setting CALIBRATED_PRWI. Restored by putting DET_MAIN back to INIT, and restarting the calibration procedure. Concluded at 17:17 UTC
17:20 - Measurement of actuators response for NI,WI mirrors. Concluded at 18:40 UTC
15:48 - TCS, Cifaldi check of the powers. Operation completed at 16:18 UTC
19:13 - CALI, Verkindt: Hrec bias updated.
The interferometer unlocked again at 19:30:51 UTC, GPS 1440444669, when a hailstorm hit the site; given that we were already 30 minutes past the end of the Commissioning slot, we decided to postpone the Scattered Light calibration activity by van Haevermaet. The interferometer was set back to PREPARE_ASCIENCE mode (and AUTOSCIENCE_ON).
Other events to report:
Guard tours (time in UTC)
Since the input beam was quite decentered on the ITMs, we translated the PR both horizontally and vertically to better recenter it (monitoring at the same time its position on the BPC quadrants).
The intervention solved the issue as shown by the latest UPV run: https://scientists.virgo-gw.eu/DataAnalysis/DetCharDev/online/upv/2025/20250828/last_day_1440280822-1440367222/V1:Hrec_hoft_16384Hz/perf/vp.html
ITF found in LOW_NOISE_3 and in Science Mode. It unlocked at 3:57 UTC, and afterwards the ITF kept systematically unlocking at LOCKING_CARM_MC_IR. An issue highlighted by the DMS was a strong oscillation on the WI F7: while investigating, we found that the data were abnormal (see attached plot).
With the support of Gherardini we opened the F7 loops and reloaded the DSP, which seems to have fixed the issue. Relock in progress.
Guard Tour (UTC)
22:28 - 22:52
1:02 - 1:24
3:27 - 4:00
DAQ
From around 14:00 UTC the signals related to INJ and DET ACS were no longer available from the DMS. After restarting the BACnetServer process at 21:01 UTC, they were acquired correctly again.
13:00 UTC ITF found in SCIENCE
14:56 UTC ITF in ADJUSTING - WI Etalon Servo Disabled/re-enabled to perform WI power supply swap (#67582 Dattilo, Zaza)
15:25 UTC ITF DOWN
16:15 UTC ITF in SCIENCE
Guard Tours (UTC)
18:02 - 18:50
20:28 - 22:50
The time pattern and magnetic witness point to a noise from the power supply of the heating belts, as evidenced by Bas in O3: https://logbook.virgo-gw.eu/virgo/?r=48521
Sorry, I missed the previous post pointing to the same problem. The time pattern is clear.
However, consulting with Federico and Vincenzo: the cause is likely the power supply, whose fans are old and do not cool it enough, so the power supply goes into protection and switches on/off.
The suggestion is to first inspect it and then replace it.
These glitches look similar to something we’ve already seen in April 2024: #64114.
At the beginning of the shift, the interferometer was found relocking, with AUTOSCIENCE_ON. It was successfully back to the LOW_NOISE_3 state at 5:29:12 UTC, and in SCIENCE mode at 5:32:12 UTC. Operational status was maintained throughout the entire shift.
Their frequency distribution seems to go from around 15 Hz up to almost 60 Hz.
First glitch is most likely a 25-min glitch. (Picture shows the whitened spectrogram of Hrec_hoft_20000Hz)
These glitches (I guess they are the same ones... and 33 Hz or so might be the energy barycenter?) seem to have a precise periodic pattern, see the attached.
Last night's daily UPV run points V1:ENV_WI_MAG_{N/V/W} as witnesses for those glitches. Compare before and after applying these vetoes.