The ITF was found in DOWN state and UPGRADING mode, with the controlled shutdown in progress.
All subsystems were placed in a safe state by their respective experts, and the DAQ was shut down at approximately 16:00 UTC.
All the DAQ servers have been stopped. Operation completed at 15:41 UTC.
Note: the DaqBox server running configurations stored in ~virgorun/.local/share/DaqBox have been moved into the ~virgorun/.local/share/DaqBox/20260325-ComputingShutdown directory to ensure the full reconfiguration of these DaqBoxes at recovery time.
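For illustration only, a minimal Python sketch of this kind of relocation (the paths are the ones quoted in the note above; the script itself is an assumption, not the actual procedure used):

```python
import shutil
from pathlib import Path

# Paths quoted in the note above; the script is illustrative only.
base = Path("~virgorun/.local/share/DaqBox").expanduser()
archive = base / "20260325-ComputingShutdown"
archive.mkdir(exist_ok=True)

# Move every stored running configuration into the archive directory,
# forcing each DaqBox to be fully reconfigured at recovery time.
for entry in base.iterdir():
    if entry != archive:
        shutil.move(str(entry), str(archive / entry.name))
```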
ITF Found in LOW_NOISE_3_ALIGNED State and COMMISSIONING Mode.
All times are UTC.
07:00 ITF manually set to DOWN state to start powering off the ITF for the upcoming controlled shutdown.
07:13 Earthquake in Fosdinovo (MS), ML 4.0 (INGV);
08:27 Set PAUSE to nodes: ITF_LOCK (Operator), SQZ_FLT (Operator), and DET_MAIN (Gouaty);
08:32 Opened position loops via VPM of: SIB2_SBE, SDB2_SBE, SNEB_SBE, SWEB_SBE, since they opened by themselves between 07:44 and 07:50, possibly related to the earthquake in Fosdinovo (MS);
08:44 UPGRADING Mode set;
08:52 INJ OFF (INJ PMC FLIP set to "UP") (Gosselin);
08:55 - 09:03 TCS Lasers all off (Melo) and Chillers Guardian system off (Ballardin), see entry #68947;
09:10 - 10:38 Shelved TCS Flags in DMS: mean_TCS_NI_CO2_PWRLAS_10, mean_TCS_WI_CO2_PWRLAS_10, mean_TCS__GUA_WI_FLOW_10, mean_TCS__GUA_NI_FLOW_10, mean_TCS__GUA_BCK_FLOW_10, TCS__GUA_Rules, mean_TCS__GUA_BCK_TE_10, mean_TCS__GUA_NI_TE_10, and mean_TCS__GUA_WI_TE_10 up to 07-04-2026 09:00 UTC (Operator, under request of Nardecchia);
09:46 NI/WI Etalon loops open (Operator);
09:54 Opened SBE_EIB position loops: PID and Actuation (Operator);
09:58 Set PAUSE to nodes: OMC_LOCK, and ITF_CONDITIONS (Operator);
10:29 - 10:37 VPM processes of the automation Metatron nodes turned off (Bersanetti);
Most of the VPM processes were shut down during the morning and the afternoon;
ITF left in DOWN State and UPGRADING Mode;
Relevant logbook entries reporting switch-off sequences:
- TCS actuators in CB switched OFF (#68947);
WE RH and NE RH have been turned off at 14:28 UTC and 14:29 UTC, respectively.
Immediately afterwards, the related processes on the VPM were stopped as well.
Switch-off sequence:
Note: outputs disabled; power supplies left ON.
We checked that the loops were working well after the Acl update.
We sent SQZ_MAIN to SQZ_LOCKED_NO_FC and it locked at the first attempt.
We then aligned the filter cavity, but we had to modify some lines of code in the FC Alignment interface.
We locked the FC both with the mirror only and with the laser. We updated the gains: SQZ_CC GAIN = 4 and LFC_Z_GAIN = 6, instead of the previous SQZ_CC GAIN = 1.55 and LFC_Z_GAIN = 6.
We also ran a SQZ CC phase scan with SR tuned, at 17:34 UTC, with CC GAIN = 40000. We noticed that the CC sensing noise is dominant above 100 Hz.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Once the detuning of SR started, we opened the shutter, set the SQZ angle to 4.1 rad, and ran the autoalignment loop for more than one hour. Then we performed the following measurements:
ITF found in COMMISSIONING, LN3_ALIGNED
17:00 UTC tuned squeezing phase scan (Vardaro)
18:05 UTC detuned phase scan: SRCL set to 11
18:30 UTC HVAC on-call called due to a boiler failure at 1500W. Intervention scheduled for tomorrow morning; flag shelved for the night.
22:00 UTC At the end of the shift, I left the ITF in COMMISSIONING Mode, LN3_ALIGNED.
The following report has been submitted to the On-call interface.
On-call events -> Air Conditioning
Title: WAB boiler failure
Author(s): andreazzoli
| Called at: 19:42, 24-03-2026, by: Zaza |
| Remote intervention: Started: ; Ended: |
| On-site intervention: Started: 20:08, 24-03-2026; Ended: 21:30, 24-03-2026 |
| Status: Resolved |
| Operator when issue resolved: Zaza |
Details:
Due to the oil-fired boiler shutting down, the water temperature in the hot water circuit has dropped to 20 °C (set to 50 °C).
Boiler operation has been restored. The system is operational but requires a more thorough inspection, which will be carried out tomorrow morning. Soldani D. has been informed.
* Note that any files attached to this report are available in the On-call interface.
ITF Found in LOCKING_ARMS_BEAT_DRMI_1F State and COMMISSIONING Mode.
All times are UTC.
07:00 MAINTENANCE started.
07:09 Acl Update step 3 Started (Masserot).
07:20 INJ beam blocked for INJ Acl Update (Gosselin, remote); then I requested DOWN to the INJ_MAIN Metatron node and paused ITF_LOCK.
07:27 Acl Update step 3 completed for INJ and SQZ, while TCS and SAT were postponed until after the shutdown (Masserot, #68939).
07:35 INJ beam block removed (Gosselin, remote).
07:43 INJ in IMC_RESTORED.
10:43 ITF in LOCKED_ARMS_IR.
11:00 MAINTENANCE ended.
12:57 ITF in LOW_NOISE_3_ALIGNED;
Tasks performed by the operator during the maintenance:
- From 08:25 to 08:35 OMC Lock;
- From 08:38 to 08:59 OMC Scan;
- 09:08 TCS Chillers Refill;
- 09:18 TCS Thermal Camera Reference;
- 11:15 TCS Power Checks at the Pickoff:
| | CH [W] | INNER DAS [W] | OUTER DAS [W] |
|---|---|---|---|
| W | 0.434 | 0.025 | 0.254 |
| N | 0.665 | 0.082 | 0.571 |
In the injection alignment strategy, the vertical setpoint of the BPC is still defined by the DC NF QPD located on the EIB.
This morning, we performed a quick test to check whether this vertical setpoint has any impact on the RFC alignment. We shifted the beam by approximately 1300 µm and observed an increase in the RFC transmission of about 15% (see figure attached).
During the final steps, we also noticed that IMC_TRA increased to 18.2 W, then dropped to 17.4 W before the system eventually unlocked. Observing the reflected beam, it was clear that the beam was misaligned on the IMC, and its transmission was unstable.
We then returned the BPC vertical setpoint to its initial position.
Although this does not necessarily mean that the RFC misalignment originates from this vertical shift, it would be worth dedicating more time to investigate whether a different IMC alignment, combined with a higher BPC Y setpoint, could lead to improved RFC transmission.
All NCals have been stopped during the maintenance.
The Acl servers running on the following rtpcs have been restarted with the Acl release v1r39p17.
The second block of the Acl upgrade is complete.
We completed the deployment of Acl version v1r39p17.
The Acl update on the SUSP rtpc (rtpc4) has been postponed until after the computing shutdown, in agreement with the TCS and commissioning coordinators.
A faint but broad spectral artifact (bump) can be seen wandering around the 50 Hz mains spectral lines, as shown in Figure 1. This artifact can go down to around 40 Hz and seems to have appeared around March 17th/18th; it was not present at the beginning of February, for instance.
Recent BruCo reports do not seem to find any channels coherent with it.
This wandering line was likely already present in the March 13 data, but not in the March 12 data, i.e., during the first recovery after the electrical shutdown. The onset of the source may therefore be uncorrelated with the shutdown.
I am attaching two plots showing the median-normalized spectrogram of the strain: one covering March 12–13, and another starting from March 18, where the spectral line is particularly visible with the better sensitivity in the detuned SR configuration achieved in recent days.
Could be related to this issue found in the past: #61815?
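For context, a minimal sketch of how such a median-normalized spectrogram can be computed (the synthetic data and parameters are assumptions for illustration, not the actual channels or analysis code used here):

```python
import numpy as np
from scipy import signal

def median_normalized_spectrogram(x, fs, seg_s=8.0):
    # Spectrogram where each frequency bin is divided by its median over
    # time: stationary lines flatten out, wandering features stand out.
    f, t, sxx = signal.spectrogram(x, fs=fs,
                                   nperseg=int(seg_s * fs),
                                   noverlap=int(seg_s * fs) // 2)
    return f, t, sxx / np.median(sxx, axis=1, keepdims=True)

# Synthetic example: white noise plus a line slowly wandering around 45 Hz.
fs = 1024.0
t = np.arange(0.0, 600.0, 1.0 / fs)
f_inst = 45.0 + 5.0 * np.sin(2 * np.pi * t / 600.0)   # instantaneous frequency
x = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * np.cumsum(f_inst) / fs)
f, tt, s = median_normalized_spectrogram(x, fs)
```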
I expect this bump to be due to the squeezer shutter being opened but the squeezing not being injected. From what I remember, the shutter on SDB1 towards squeezing was opened on March 12-13, then closed for the weekend and reopened in the middle of the week. It has remained open since then.
The line's amplitude has continuously decreased, and it cannot be clearly seen after midnight today.
ITF found at CARM_NULL_1F, relocking up to LN3_ALIGNED. COMMISSIONING mode.
The planned activity "Acl Update step 2" (Masserot) proceeded without major issues from 07:00 to 10:00 UTC.
Infrastructure civil works by an external company, close to the MC building, from 11:00 to 13:00 UTC. The ITF remained stably locked at LN3_ALIGNED.
Considering all the channels:
DQ_BRMSMon_BRMS_SEA_SEIS_100mHz_200mHz_ENV_CEB_SEIS_N_mean
DQ_BRMSMon_BRMS_SEA_SEIS_100mHz_200mHz_ENV_CEB_SEIS_V_mean
DQ_BRMSMon_BRMS_SEA_SEIS_100mHz_200mHz_ENV_CEB_SEIS_W_mean
DQ_BRMSMon_BRMS_SEA_SEIS_300mHz_500mHz_ENV_CEB_SEIS_N_mean
DQ_BRMSMon_BRMS_SEA_SEIS_300mHz_500mHz_ENV_CEB_SEIS_V_mean
DQ_BRMSMon_BRMS_SEA_SEIS_300mHz_500mHz_ENV_CEB_SEIS_W_mean
we can set the limits above which it is more probable to be unlocked than locked:
| channel | limit |
|---|---|
| 100mHz_200mHz_ENV_CEB_SEIS_N | 0.8e-6 |
| 100mHz_200mHz_ENV_CEB_SEIS_V | 0.6e-6 |
| 100mHz_200mHz_ENV_CEB_SEIS_W | 1e-6 |
| 300mHz_500mHz_ENV_CEB_SEIS_N | 1.8e-6 |
| 300mHz_500mHz_ENV_CEB_SEIS_V | 0.8e-6 |
| 300mHz_500mHz_ENV_CEB_SEIS_W | 2e-6 |
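As a sketch of how these limits could be applied in practice (the function and the channel-value mapping are hypothetical, not an existing tool; the numbers are copied from the table above):

```python
# BRMS limits from the table above, beyond which an unlock is more
# probable than a lock (same units as the BRMS channels themselves).
LIMITS = {
    "100mHz_200mHz_ENV_CEB_SEIS_N": 0.8e-6,
    "100mHz_200mHz_ENV_CEB_SEIS_V": 0.6e-6,
    "100mHz_200mHz_ENV_CEB_SEIS_W": 1.0e-6,
    "300mHz_500mHz_ENV_CEB_SEIS_N": 1.8e-6,
    "300mHz_500mHz_ENV_CEB_SEIS_V": 0.8e-6,
    "300mHz_500mHz_ENV_CEB_SEIS_W": 2.0e-6,
}

def channels_over_limit(brms_values):
    # Return the channels whose latest BRMS reading exceeds its limit;
    # `brms_values` maps channel suffix -> latest BRMS value.
    return [ch for ch, lim in LIMITS.items()
            if brms_values.get(ch, 0.0) > lim]
```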
In the past, a 1 Hz comb was observed to originate from the TCS chillers when operating outside the power-save mode (elog #40113, #63357, #63358).
Following the electrical power outage on Feb. 28th, a similar comb is observed again. The comb is visible in the current and voltage monitors UPS_CURR_* and UPS_VOLT_*, as well as in the magnetometers CEB_MAG (CEB hall) and IB_MAG (IB tower base), Figure 1.
The noise comb is visible in the following time intervals, Figure 2:
- Time slot 1: Mar 2, ~09:53-09:55 to 10:30-10:35 UTC
- Time slot 2: Mar 2, ~14:54 to Mar 6, ~12:20 UTC
- Time slot 3: Mar 6, ~14:30 - 14:50 UTC
According to the logbook, several recovery activities were carried out on Mar 2, shortly after the power outage. Among them are the TCS lasers and chillers recovery (elog #68787) and the INJ recovery (elog #68793).
Time slot 1
During the morning, the INJ chillers were switched on first (top), followed by the CO2 main laser chillers (middle), Figure 3.
The switch-on involved the EE room chiller (*CHILLER_NEO*, for NEOVAN electronics) and the chiller in the chiller room (*CHILLER_HEAD*, for the NEOVAN head), while the chiller serving the beam dumps and slave laser was already on and is not shown.
Two distinct sets of 1 Hz combs are observed in the UPS channel (bottom), each appearing and disappearing at different times, indicating that they are likely associated with the operation of different devices. The INJ chillers are switched on before the occurrence of the first comb, although no direct temporal coincidence is observed. The CO2 main laser chillers are switched on before the occurrence of the second comb; however, a clear correlation is observed at switch-off, as the disappearance of this comb coincides with the switch-off of the CO2 chillers.
The INJ chiller in the EE room is a spare unit of the TCS system. A dedicated test was performed on Mar 9, during which the power-save mode of the INJ chiller was temporarily disabled (16:30-16:35 UTC). Following this change, the 1 Hz comb reappeared in the CEB and IB magnetometers, Figure 4. Thus, this device can generate this type of noise when operating outside the power-save mode.
Overall, the switch-on times do not show a strict one-to-one correspondence with the comb occurrence, while a clearer coincidence is observed for the switch-off of the CO2 chillers.
Time slot 2
The 1 Hz comb is observed to persist from Mar 2 to Mar 6. During the afternoon of Mar 2, as reported in elog #68787, several activities were carried out in the TCS area, including the identification of a malfunction in one of the RF drivers of the NI CO2 laser. As shown in Figure 5, the switch-on of the CO2 chillers does not show a clear coincidence with the presence of the 1 Hz comb. In addition, the elog entry reports that the chillers were verified to be operating in the correct configuration (power-save mode). This suggests that the observed comb in this time period may not be directly related to the chiller operation, and could instead be associated with the RF driver malfunction.
Time slot 3
On Mar 6, activities were carried out in the TCS area, including the replacement of the NI CO2 laser RF driver (elog #68263). As shown in Figure 6, the comb disappears around 12:30 UTC, in coincidence with the switch-off of the NI chiller, although the chillers were previously verified to be operating in the correct configuration. However, during the subsequent activities (Figure 7), including tests and the RF driver replacement, no clear one-to-one correspondence is observed between the chiller operation and the presence of the comb. In particular, after the intervention, the system was fully re-powered, with the lasers switched on at 14:31 UTC, and normal operating conditions were progressively recovered.
These observations suggest that, while the chillers can generate a 1 Hz comb (as also supported by dedicated tests), the RF driver malfunction may represent an additional or independent source contributing to the observed comb structure. Notably, the comb is no longer observed following the RF driver replacement.
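As a rough illustration of how the visibility of a 1 Hz comb can be quantified in a sensor spectrum (a sketch under assumed parameters, not the analysis actually used above):

```python
import numpy as np
from scipy import signal

def comb_ratio(x, fs, spacing=1.0, fmax=100.0):
    # Compare spectral power at integer multiples of `spacing` Hz with
    # the background half-way between teeth: a crude comb indicator.
    # Assumes fmax < fs / 2.
    f, pxx = signal.welch(x, fs=fs, nperseg=int(32 * fs))  # ~0.03 Hz bins
    df = f[1] - f[0]
    teeth = [pxx[int(round(k / df))]
             for k in np.arange(spacing, fmax, spacing)]
    backgr = [pxx[int(round((k + spacing / 2) / df))]
              for k in np.arange(spacing, fmax, spacing)]
    return np.median(teeth) / np.median(backgr)  # >> 1 when a comb is present
```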
On Friday we performed the injection with the ITF locked in LOW_NOISE_3_ALIGNED.
We tested three current levels:
We kept injecting this signal until the ITF unlocked at 12:39 UTC.
The comb is well visible in hrec up to a few hundred Hz. In addition to the current and the ddp (ELECTRIC) monitors, the comb is also visible in all magnetometers in the CEB (Fig. 2). In some places (BS, CEB, ...) the injected 11 Hz line exceeds 10 nT (Fig. 3).
The comb amplitude in hrec has a steeper frequency slope than in the monitors and magnetometers, roughly 1/f^2 steeper (Fig. 2).
From a quick look, the amplitude of the signal scales proportionally in hrec and in all sensors (Figure 4).
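In compact form, the two observations above amount to (a back-of-the-envelope scaling, with $A$ denoting the comb-tooth amplitude and $I_{\mathrm{inj}}$ the injected current):

$$
\frac{A_{\mathrm{hrec}}(f)}{A_{\mathrm{mag}}(f)} \propto \frac{1}{f^{2}},
\qquad
A_{\mathrm{hrec}} \propto I_{\mathrm{inj}} .
$$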
In order to test whether the magnetometers were sensing a true magnetic field or were being fooled by a fluctuating ground:
We trust that the magnetometers' reading is a true magnetic field produced in the CEB hall.
I have run the statistical analysis on 1 year of data to understand which ocean conditions allow locking the ITF.
I have used the DQ_BRMSMon_BRMS_SEA_SEIS_100mHz_200mHz_ENV_CEB_SEIS_N channel to evaluate the distribution of the weather conditions and to compare the number of successful locks (normalized to the total number) against the unsuccessful ones.
The ratio of these quantities gives an idea of the lock probability.
In Figure 1 it is visible that at 1.4e-6 the probability of not reaching lock (META = 135) is 10 times larger than the probability of locking.
A reasonable value should then be 1e-6, to be applied to all the channels.
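A minimal sketch of this kind of estimate (the data loading and the META-state bookkeeping are placeholders; only the ratio logic follows the description above):

```python
import numpy as np

def fail_to_lock_ratio(brms, locked, bins):
    # Histograms of the BRMS value at each lock attempt, split by outcome
    # and normalized to their own totals; their ratio estimates how much
    # more probable a failed lock is at a given BRMS level.
    # `brms`: array of BRMS values, `locked`: boolean array of outcomes.
    h_lock, _ = np.histogram(brms[locked], bins=bins)
    h_fail, _ = np.histogram(brms[~locked], bins=bins)
    p_lock = h_lock / max(h_lock.sum(), 1)
    p_fail = h_fail / max(h_fail.sum(), 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        return p_fail / p_lock  # e.g. ~10 at 1.4e-6 in the analysis above

bins = np.linspace(0.0, 3e-6, 31)
# ratio = fail_to_lock_ratio(brms_at_attempts, lock_outcomes, bins)
```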
At 13:43 UTC, all NCals and associated PCal lines have been moved from near 36 Hz to near 18 Hz.