ITF found in LOCKED_ARMS_IR and COMMISSIONING mode. The calibration activity at WE concluded at around 16:00 UTC, after which the CAL Team moved to the NE building.
At 17:20 UTC the preparatory work at NE was concluded, and I started to relock. After a brief manual realignment of PR and SR, the ITF was back in LOW_NOISE_3 at 18:17 UTC.
At the request of the crew, at 18:19 UTC I set the ITF to CALIBRATION mode, and at 18:29 UTC I launched the Python script inject_pcal_mechanical.py, which will keep measuring for the next 8 hours.
Calibration in progress, ITF left locked.
Other activities carried out during the shift:
From 13:00 to 13:50 UTC - Installation of new Power Supply for SQZ in DER (Montanari, Sposito)
We have tried to measure the noise of the IMC ramp-auto and the Ampli-EOM using monitoring channels of both electronic boxes.
Figure 1 shows the check of the sensing noise of the spectrum analyzer. With a 50 Ohm terminator plugged into the input port we measure a noise level of 9 nV/rtHz. We used the same settings for all other measurements, between 100 kHz and 10 MHz.
Figure 2 shows the spectrum analyzer plugged into the "EO monit" output of the Ramp auto, which, according to Jean-Pierre, reads 1/20 of the Ramp auto output when a 50 Ohm impedance device is used to read the EO monit output. The noise there is at a ~19 nV/rtHz level. After subtracting the measurement noise floor and multiplying by 20, this corresponds to an upper limit of ~300 nV/rtHz on the noise from the main output of the Ramp auto. The EO monit is a LEMO 3-pin connector, so we used an adapter to plug it into the BNC input of the spectrum analyzer.
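For reference, the arithmetic behind the quoted upper limit (quadrature subtraction of the analyzer sensing floor, then scaling by the 1/20 monitor ratio) can be sketched as below; the helper function is hypothetical, and the 19 and 9 nV/rtHz inputs are the figures quoted above.

```python
import math

def output_noise_upper_limit(monitor_nv, floor_nv, ratio=20.0):
    """Subtract the analyzer sensing floor in quadrature from the monitor
    reading, then scale by the monitor attenuation ratio (1/20 here)."""
    return ratio * math.sqrt(monitor_nv**2 - floor_nv**2)

# ~19 nV/rtHz measured at EO monit, ~9 nV/rtHz analyzer sensing floor
limit = output_noise_upper_limit(19.0, 9.0)  # nV/rtHz at the main output
```

With these rounded inputs the quadrature estimate comes out somewhat above 300 nV/rtHz; the exact figure depends on the rounding of the measured levels.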
Figure 3. We then measured the noise by plugging the spectrum analyzer into the "EOM CORR MONIT" output of the Ampli EOM.
Figure 4 shows the noise level with the IMC locked, ~260 nV/rtHz.
Figure 5 shows the noise level with the IMC unlocked, ~200 nV/rtHz.
Figure 6 shows the noise level with no input from the Ramp auto plugged into the Ampli EOM, ~335 nV/rtHz.
We forgot to take a measurement with the 50 Ohm plug at the input of the Ampli EOM.
Figure 7 shows the noise at the output cable of the Ramp auto (the end of the cable that is normally plugged into the Ampli EOM), with the IMC unlocked. The noise was not stationary, with many bumps and lines, sometimes small and sometimes large (see figure 8). The floor was ~40 nV/rtHz.
Figure 9. We wanted to plug the Ampli EOM corr monitoring into a digital demodulation channel. We found a free cable in the atrium that supposedly ran to the Piscina, but we could not find its other end in the Piscina afterwards.
We plugged the cable back into the Ampli EOM after this work.
Today, we installed the remote power supply for the Faraday units.
The SusDAQBridge process was restarted correctly (a reload was not enough) and we could check online that the SAT gains were being updated again both as INFO in the VPM and online through dataDisplay.
The Thread.__init__ message appeared for the first time at 2025-04-01-06h17m27-UTC. Below are the details from the SusDAQBridge server:
2025-03-30-08h32m58-UTC>ERROR..-worker: Sa_MC - F0_DC_ENBL: Got NaN. Dsp #42103 status: EXIT_DSP_NOTCONNECTED
2025-03-30-16h32m40-UTC>ERROR..-worker: Sa_WE - ACC1_Noise: Got NaN. Dsp #31013 status: EXIT_DSP_NOTCONNECTED
2025-04-01-06h17m27-UTC>ERROR..-Thread.__init__() not called
2025-04-01-06h17m37-UTC>ERROR..-Thread.__init__() not called
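A minimal sketch for checking the spacing between repeated errors, parsing the YYYY-MM-DD-HHhMMmSS-UTC stamp format used in the excerpts above (the helper names are hypothetical):

```python
import re
from datetime import datetime

# Match timestamps like "2025-04-01-06h17m27-UTC" at the start of log lines.
STAMP = re.compile(r"(\d{4}-\d{2}-\d{2})-(\d{2})h(\d{2})m(\d{2})-UTC")

def parse_stamp(line):
    """Return the datetime encoded in a log line, or None if absent."""
    m = STAMP.search(line)
    if m is None:
        return None
    date, hh, mm, ss = m.groups()
    return datetime.strptime(f"{date} {hh}:{mm}:{ss}", "%Y-%m-%d %H:%M:%S")

lines = [
    "2025-04-01-06h17m27-UTC>ERROR..-Thread.__init__() not called",
    "2025-04-01-06h17m37-UTC>ERROR..-Thread.__init__() not called",
]
stamps = [parse_stamp(l) for l in lines]
delta = (stamps[1] - stamps[0]).total_seconds()  # spacing between the two errors, in seconds
```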
ITF found in relocking phase, Calibration mode.
At 6:52 UTC ITF in Commissioning mode and in DOWN; below the list of the activities communicated in control room:
SBE FC
At around 10:00 UTC, under request of M. Vardaro, I switched off the FCIM motor crate and I switched on the FCEM motor crate.
As Franco explained, the CoilsSb* issue is long standing and I agree it can be misleading.
However, the actuation for NE/WE/BS was correct, as we have checked during the evening, so that means that the relays were set correctly.
The same can be said for the gain values inside the DSPs: without them the actuation would have been wrong, and we would probably have unlocked, given the quite big difference between the LN1 and LN2 actuator configurations (more than a factor of 10, if I recall correctly).
I can infer then that the issue was that the values of the gains were correct, but the update was not sent to the DAQ, so maybe the hiccup was at the level of SusDAQBridge?
Looking into it, I can see loads of errors of this type happening once every 10 seconds, so there may be a problem there.
"Note that the processes CoilsSb* were restarted around noon (see gray bands in figure 4) and the logs of these processes show a lot of error messages (python errors and ModBus errors), to be checked if this is the source of the issue"
If you check the log file over the past months you will see the same error; in spite of the reported error, the actions requested to the relays are always completed correctly. This is a known problem for which a dedicated merge request on PySb is pending:
https://git.ligo.org/virgo/virgoapp/PySb/-/merge_requests/3
The proper solution would be the update of the relay box firmware but we preferred not to do it during the run.
We have survived like this so far. We could decide to implement the merge request and silence the error if it becomes too misleading, as in this case.
Taking advantage of the break, this morning we wanted to study the problem of the IMC unlocks caused by the flip mirror installed in the green generation box of the CEB.
At first, we reconnected the original flip mirror mount to reproduce the problem. Strangely enough, we could not cause an unlock of the IMC in any way (neither with the mount installed inside the box nor outside it).
One possible explanation is that the unlocks were due to some electrical interference with the fiber EOM box, and that, during the tests we did with it over the last months, we fixed some ground short circuits/electrical anomalies.
For the time being, we left the flip mirror installed in the box and connected. We also re-engaged the automatic flip-up of the mirror after the lock of the CITF, in order to see if the problem appears again (and, if so, to better debug it); see metatron ARMS_LOCK.py, line #1545.
Due to the use of a shaping filter in the coil driver boards of NE and WE, the injected voltage values differ from the settings. The injected values can be retrieved by looking at the specific voltage channel of each coil.
Using the GPS of the injections, here are the values of the voltages as seen by the V1:Sc_XX_MIR_VOUT_YY (with XX: WE, WI, NE, NI and YY: DL, DR, UL, UR) channels:
Mirror | Coil | Vinj (V) |
---|---|---|
NE | DL | 0.194 |
NE | DR | 0.195 |
NE | UL | 0.205 |
NE | UR | 0.202 |
NI | DL | 0.316 |
NI | DR | 0.316 |
NI | UL | 0.314 |
NI | UR | 0.317 |
WE | DL | 0.196 |
WE | DR | 0.195 |
WE | UL | 0.193 |
WE | UR | 0.192 |
WI | DL | 0.316 |
WI | DR | 0.314 |
WI | UL | 0.316 |
WI | UR | 0.316 |
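As a small hypothetical convenience, the full set of channel names following the pattern above can be generated programmatically:

```python
# Build the V1:Sc_XX_MIR_VOUT_YY channel names described above,
# for XX in {WE, WI, NE, NI} and YY in {DL, DR, UL, UR}.
MIRRORS = ("WE", "WI", "NE", "NI")
COILS = ("DL", "DR", "UL", "UR")

channels = [f"V1:Sc_{m}_MIR_VOUT_{c}" for m in MIRRORS for c in COILS]
# 16 channels in total, e.g. "V1:Sc_WE_MIR_VOUT_DL"
```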
Hrec has two actuator models for each mirror NE,WE,BS: one model for LN1, one model for LN2.
When the flag V1:SAT_NE_LN2_P2 is 1, Hrec uses the LN2 model for NE
When the flag V1:SAT_WE_LN2_P2 is 1, Hrec uses the LN2 model for WE
When the flag V1:SAT_BS_LN2_P2 is 1, Hrec uses the LN2 model for BS
If those flags are missing, Hrec considers by default that ITF is in LN2.
If those flags are at 0, Hrec considers that we are in LN1.
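The flag logic above can be condensed into a small sketch (a hypothetical helper, not Hrec's actual implementation):

```python
def hrec_actuator_model(flag):
    """Return the actuator model Hrec selects from a V1:SAT_*_LN2_P2 flag.

    flag is None when the channel is missing, otherwise 0 or 1.
    """
    if flag is None:
        return "LN2"  # missing flag: Hrec defaults to LN2
    return "LN2" if flag == 1 else "LN1"
```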
Since the relock of this afternoon, the ITF was in LN2 but the flags were kept at 0; thus Hrec was using the LN1 models for the NE, WE, and BS actuators.
That's why the sensitivity was noisier (BNS range around 48 Mpc instead of 55 Mpc) and the adjustment of the optical response was wrong (DCP frequency around 260 Hz instead of 180 Hz).
Waiting for the SAT flags to be put back to the correct value of 1, I restarted Hrec around 21h45 UTC after commenting out the lines SWITCH_LN2 and SET_ACTUATOR in the configuration Hrec_actuators.cfg (by default Hrec considers that the ITF is in LN2). So, we are back to 55 Mpc.
Next time the suspension people want to play an April Fools' joke with the coil driver flags, they should tell us before the end of the day. We lost 6 hours finding the origin of the wrong h(t) reconstruction.
After the maintenance yesterday, around 18h LT Michal realized that there was something strange in the reconstructed h(t), in particular the cavity pole was around 260 Hz instead of 180 Hz. Also the optical gains were not as usual (see figures 1 and 2). On the contrary, the cavity pole estimated by ISC was still ok (see last figure).
After a lot of investigations, and some injections for measurements of the optical responses and for actuator responses, we found that the issue comes from the fact that the channels SAT_*_LN_P1 were all at 0, instead of being at 1, for BS, NE and WE (see figure 3).
We understood that the suspensions were in fact properly switched in their correct modes (by Metatron), but that the monitoring channels giving their modes were not correct. Hence Hrec was using the actuator models in HP for BS and LN1 for NE and WE, instead of LN1 for BS and LN2 for NE and WE. The use of incorrect actuator models directly impacts the estimation of the optical responses.
Note that the CoilsSb* processes were restarted around noon (see gray bands in figure 4) and their logs show a lot of error messages (python errors and ModBus errors); to be checked whether this is the source of the issue, or something else in the DSPs. We called Valerio Boschi around 23h30 LT about this issue, to be checked tomorrow morning.
Around 23h45 LT, we restarted Hrec by temporarily forcing the mode of the suspensions to their real mode, without using the SAT monitoring channels. The optical responses are now back to nominal values (see very last part of figure 5).
_____________________
Around 23h50 LT (21h50 UTC), we finally started the injections planned for this evening/night: injection of lines at high frequency (1 kHz to 8 kHz range) with the NE and WE PCal, to re-estimate the mechanical response of the PCal due to the excitation of the mirror internal (drum) modes. (We skipped the injections for checking Hrec.)
--> Injections should run to around 9h LT tomorrow morning.
___________________
To trigger this issue quickly next time, a flag must be added in the DMS to check that the suspensions are in the correct mode depending on the lock state.
Today, I found the ITF status in BAD_WEATHER.
15:00 UTC Commissioning break started.
During the shift the lock was unstable; we managed to achieve a stable lock at 15:36 UTC. Around 16:00 UTC the scheduled WEB acoustic injection activities started, ending around 16:50 UTC. After that we needed to begin the scheduled calibration, but we encountered a problem with Hrec. M. Was, D. Bersanetti, A. Masserot, D. Verkindt, L. Rolland, and C. Grimaud started to investigate the issue. At 20:22 UTC, L. Rolland and C. Grimaud began the calibration. The activities are still in progress.
I'm not sure ENV_NOISE_MAG_WEB was connected to DAC ch7.
I think it was ch 6 instead - To Be Checked.
We measured the acoustic coupling at WEB before the interventions which will occur during the commissioning break.
Note: during this measurement the Hrec calibration is off by about -20%.
Setup: two amplified loudspeakers on North side, one in front of racks, one in front of cryotrap. Figure 1.
Injected noise: 8-2000 Hz, level to DAC 0.004; used ENV_NOISE_MAG_WEB connected to DAC ch7.
Times are in the attached log file.
Figures 2 and 3 show the observed effect in the environmental sensors, B8 and Hrec. The raised noise structures look similar to those observed in January (elog 66031).
Figure 6: an extra noise in Hrec is produced at low frequency, not coherent with the injected noise.
As noticed before, the shape of the structures depends on the level of microseism. Figures 4 and 5 record the wind and seismic noise at the time of this measurement.
Today I changed the Autoscience functionality of the ITF_CONDITIONS node: it will now ask to go to DQSTUDIES instead of SCIENCE, which will be the default during the Commissioning Break whenever we reach LOW_NOISE_3 and just take data.
DQSTUDIES was also added to the list of states from which the Automation will allow the automatic transition to SCIENCE (when we resume it; for now it will default to itself).
This morning, the tuning of the polynomials used to drive the DAC outputs was done by measuring in closed loop the DAC voltage as a function of the requested frequency, from 10 Hz to 60 Hz in steps of 10 Hz.
Everything went well until the 60 Hz frequency was requested; at this frequency the WEN rotor started to have some issues extracting the frequency.
The parameters were updated. The WWN rotor remains in operation, while the WEN rotor has been stopped.
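The kind of fit involved can be sketched as follows; the polynomial degree and the data points are illustrative assumptions, not the measured values:

```python
import numpy as np

# Requested rotor frequencies (Hz) and the DAC voltages measured in closed
# loop at each step -- illustrative numbers only, not the actual measurement.
freqs = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
volts = np.array([0.5, 1.1, 1.8, 2.6, 3.5, 4.5])

# Fit a 2nd-order polynomial mapping requested frequency -> DAC voltage.
coeffs = np.polyfit(freqs, volts, deg=2)
predict = np.poly1d(coeffs)  # callable: predict(f) gives the DAC voltage
```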
This morning, the demodulation noise mitigation was set up on B1_PD2_56MHz.
For B1p_PD1_56MHz, only the demodulation phase noise mitigation is done, using the B1s_PD1_112MHz phase signal.
The B1_PD2_56MHz and B1p_PD1_56MHz demodulation noise mitigations are engaged at LOCKED_DC_READOUT and disengaged at DOWN.
ITF found in Science mode with Autoscience on.
At 6:00 UTC ITF in Maintenance mode, below the list of the activity communicated in control room:
 | CH [W] | INNER DAS [W] | OUTER DAS [W] |
---|---|---|---|
W | 0.285 | 0.035 | 0.265 |
N | 0.66 | 0.055 | 0.615 |
All the activities concluded at around 10:00 UTC. After Maintenance I set BAD_WEATHER because the wind activity was very high; we could lock from 12:00 UTC to 12:14 UTC.
On March 18th, PDU DPDT switches were installed on both NE (as reported in entry #66399) and MC. While the NE setup was tested on the 18th, the MC devices were tested successfully this morning.
Today, I found ITF locked in CARM_NULL_1F. 13:37 UTC ITF back in SCIENCE with Autorelock.
At 15:00 UTC, scheduled Noise Injections and Calibration began. Here is the list of activities:
The CALIBRATED_DF_PCAL process was skipped due to insufficient time remaining.
18:40 UTC CALIBRATION ended
18:41 ITF back in SCIENCE
Guard Tour (UTC)
17:50 - 18:20
20:30 - 21:00
After some investigation, we found that the Tolm packets sent by the Acl tasks are not transmitted in the same order as the ACL tasks are executed:
As a reminder, monitoring information about emission and transmission times is available for each packet exchange using the TOLM-v2 format, meaning that today this information is not available for the Tolm packets sent to the suspensions.
For the ISC_rtpc(rtpc20),
Today, 2025-03-31 at 12h21m-UTC, as a trial to recover the expected Tolm packet transmission order,
ITF found in LOW_NOISE_3 in SCIENCE mode (AUTOSCIENCE_ON).
Unfortunately the ITF was a bit unstable. We collected 3 unlock events:
Ego truck passages around CEB (UTC):
07:55 - 08:45
SBE
08:42 UTC - SQB1 Position loop opened (TBC). Properly closed via VPM.