Upon reaching the site, I found the ITF in LOCKED_ARMS_BEAT_DRMI_1F, with Gouaty investigating the B1p_QD1 issue. After the problem was solved (see report #63776) we attempted to relock, but due to the bad weather conditions it was not possible to reach CARM_NULL_1F stably. The wind intensity slowly decreased, but the ITF kept systematically unlocking at LOCKING_CARM_NULL_3F or LOCKING_CARM_NULL_1F. At 19:00 UTC I contacted the ISC Oncall (Mantovani) but, at the following attempt and without adjustments, we were able to reach LOW_NOISE_3 at 19:55 UTC. I tried to reach LOW_NOISE_3_SQZ, but SQZ_MAIN was stuck in SQZ_LOCKED_NO_FC.
At 20:42 I set the ITF Mode in CALIBRATION and started the planned Calibrations.
20:43 UTC - Measurement of the actuator response for the NE and WE actuators and the NI, WI and BS marionettes, interrupted at 21:05 UTC due to an unlock;
21:06 UTC - Measurement of the TF between CAL and Sc channels, completed;
21:15 UTC - Measurement of actuators response for PR and BS mirrors;
Calibrations in progress.
Guard Tour
18:06 UTC - 18:50 UTC
20:01 UTC - 20:31 UTC
ISC
(28-03-2024 19:00 - 28-03-2024 19:50) From remote
Status: Ended
Description: During the shift, the ITF kept unlocking systematically at LOCKING_CARM_NULL_3F or LOCKING_CARM_NULL_1F.
Actions undertaken: I contacted the ISC Oncall (Mantovani), the ITF locked without further input at the following attempt.
Looking at some ALS events of the 2024-03-27 night
A possible simple patch is to clip the DARM_FREQ signal in a given range when the signal is centered on zero.
This has been implemented in the CEB_ALS server and automated in the ARMS_LOCK metatron node.
This plot shows the trend of two lock sequences before the automation of this patch.
There are glitches on the NArm_BEAT_dFreq signal during these periods, none on the WArm_BEAT:
Let's see whether this patch improves the lock acquisition.
A further possible improvement would be to clip the glitches directly on the arm frequency channels (NArm_BEAT_dFreq and WArm_BEAT_dFreq), to reduce the glitches on the CARM and DARM frequency signals.
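As a rough illustration of the clipping idea described above, here is a minimal sketch in Python; the function name and the ±limit value are illustrative assumptions, not the actual CEB_ALS server implementation.

```python
import numpy as np

def clip_beat_glitches(dfreq, limit=50.0):
    """Clip glitches on a beat-note frequency signal.

    When the signal is centered on zero, excursions beyond +/- limit [Hz]
    are treated as glitches and clamped, so they do not propagate to the
    CARM and DARM frequency signals. The limit value is illustrative,
    not the one used in the CEB_ALS server.
    """
    dfreq = np.asarray(dfreq, dtype=float)
    # Only clip when the signal is actually centered on zero:
    # the median is a glitch-robust estimate of the operating point.
    if abs(np.median(dfreq)) < limit:
        return np.clip(dfreq, -limit, limit)
    return dfreq

# A 300 Hz glitch on an otherwise quiet, zero-centered signal is clamped:
print(clip_beat_glitches([1.0, -2.0, 300.0, 0.5, -1.5]))
```

Using the median (rather than the mean) to decide whether the signal is centered on zero avoids the glitch itself pulling the estimate out of range.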
Using the PCal injected permanent lines, we have checked that h(t) has the correct sign, as defined in common by LIGO and Virgo.
The definition of the strain is: h = (L_North - L_West) / L0.
On NE, when PCal power increases, the signal PCAL_NE_Rx_PD1 increases.
Since the PCal injected line is at a frequency above the pendulum resonance (0.6 Hz), and since the PCal beam pushes the NE mirror from the ITF beam side,
when the PCal power increases, the arm length L_North decreases, and thus the strain h decreases.
The phase of the TF between hoft_raw and PCAL_NE_Rx_PD1 must be around pi. That is what we observe on the attached plot.
The same deduction on the WE mirror allows us to conclude that when the PCal WE power increases, PCal_WE_Rx_PD1 increases, L_West decreases, and thus h increases.
The phase of the TF between hoft_raw and PCAL_WE_Rx_PD1 must be around 0. That's what we observe on the attached plot.
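The sign check above boils down to measuring the phase of the transfer function between hoft_raw and the PCal Rx photodiode at the injected line frequency. Below is a minimal sketch on synthetic data; the sampling rate and line frequency are placeholder values, not the actual PCal configuration.

```python
import numpy as np

fs = 1000.0    # sampling rate [Hz], illustrative
f_line = 60.5  # PCal line frequency [Hz], placeholder value
t = np.arange(0, 10, 1 / fs)

# Synthetic signals: on NE, a PCal power increase *decreases* h,
# so hoft is in antiphase (pi) with the PCal Rx photodiode signal.
pcal_rx = np.sin(2 * np.pi * f_line * t)
hoft = -np.sin(2 * np.pi * f_line * t)

def tf_phase(x, y, f0, fs):
    """Phase of the transfer function y/x at frequency f0,
    obtained by demodulating both signals at f0 (single-bin DFT)."""
    tt = np.arange(len(x)) / fs
    lo = np.exp(-2j * np.pi * f0 * tt)  # local oscillator at f0
    return np.angle(np.sum(y * lo) / np.sum(x * lo))

phi = tf_phase(pcal_rx, hoft, f_line, fs)
print(abs(phi))  # ~pi for the NE case; ~0 would be expected for WE
```

On real data the same demodulation would be applied to hoft_raw and PCAL_NE_Rx_PD1 (or PCAL_WE_Rx_PD1) at the permanent line frequency.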
Upon arrival I found the ITF in Science Mode (LN3_SQZ). Unfortunately, the ITF unlocked due to the bad weather conditions.
After several attempts, the ITF was left locked on the CITF for the whole shift and, in agreement with Sorrentino, we decided to take advantage of this situation to fix the strength of the ALS green with Casanueva and Van de Walle from remote.
During a relock attempt the B1p_QD1_Galvo remained open. I tried a few times to close it, but it reopened systematically. I asked Gouaty to check on it. Troubleshooting in progress...
DET
(28-03-2024 11:00 - 28-03-2024 13:00) From remote
Status: Ended
Description: B1p_QD1_Galvo remained open
This morning I was contacted by the operator because the galvo loops of B1p_QD1 were opening in LOCKED_ARMS_BEAT_DRMI_3F and it was not possible to close them back.
As shown on the attached figure, the corrections of the B1p_QD1 galvo are indeed much larger (in the negative direction) than what they were yesterday. So we suspect that the ITF is not in its standard alignment working point. This is also corroborated by the observation that the B1p beam on the B1p camera was quite off-centered (at the bottom of the camera).
Putting the SDB1 bench in fixed setpoints (while the ITF was in DOWN) was enough to recenter the beam on the B1p camera. The galvo loops then engage correctly when reaching LOCKED_ARMS_BEAT_DRMI_1F, but the problem reappears as soon as we reach LOCKED_ARMS_BEAT_DRMI_3F, because of the engagement of the SDB1 B5 beam drift control. We disabled the SDB1 B5 beam drift control by hand, after which it was possible to close the galvo loops of B1p_QD1 even in LOCKED_ARMS_BEAT_DRMI_3F. It was not possible to lock the ITF at further steps due to the high wind activity.
Since we suspect a wrong alignment of the ITF mirrors, we only made minor changes to the SDB2 QPD centering (B1p M2 V: -200, B5 M1 H: -200, B5 M2 V: -200). This is probably not yet enough to lock the ITF, so the proposed procedure is to disable the SDB1 B5 drift control when arriving in LOCKED_ARMS_BEAT_DRMI_3F, restore the B1p QD1 galvo loops, and then try to proceed with the lock acquisition up to the point where the mirror alignment loops are correcting the alignment, at which point it should be possible to restore the SDB1 drift control and then the floating setpoints.
After performing another -200 steps with B1p_M2_V we were able to keep the galvo loops of B1p_QD1 engaged while the B5 drift control of SDB1 was engaged. It was then possible to reach CARM_NULL_1F (see attached figure).
The value of the vertical correction of B1p_QD1 galvo is still a bit large when we lock the CITF (around -7V). We will monitor this to understand if a further tuning is needed.
The following report has been submitted to the On-call interface.
On-call events -> Interferometer Sensing & Control
Title: ITF not locking
Author(s): mantovani
Called at: 22:45, 27-03-2024, by: Gherardini |
Remote intervention: Started: 22:45, 27-03-2024; Ended: 23:45, 27-03-2024 |
On-site intervention: Started: ; Ended: |
Status: Resolved |
Operator when issue resolved: Berni |
Details:
The interferometer was not locking, and even though the weather conditions were bad (though not so clearly bad as to obviously explain why the lock could not be achieved), with Fabio and Francesco we decided to follow a lock acquisition to spot other possible issues.
There were no particular issues apart from the moving suspensions, which made the lock not robust. We locked by misaligning the SR TY by +0.8 urad at CARM_NULL_1F to lower the fringe (since the fringe was really unstable and we wanted to avoid the shutter closing).
Fabio and Francesco proposed that we should define a level of weather conditions beyond which the ITF lock cannot work, so that an on-call intervention is pointless. I think this is a very reasonable request.
* Note that any files attached to this report are available in the On-call interface.
The shift started with Maddalena connected to investigate the locking troubles; she verified that the ISC loops were working properly and that the oscillations were due to the high sea and wind activity.
At 22:42 UTC we reached LN3 performing the "trick of the SR" in CARM_NULL_1F; then I started the calibrations:
At this point, considering the difficulties in relocking, the increasing wind, and the fact that the two LIGO interferometers were already in Science, I did not manually unlock the ITF to complete the calibration; instead I asked to go to LN3_SQZ and then set Science mode.
It was not possible to inject the squeezing: the SQZ node was complaining about the 4 MHz signal (experts informed by mail), so I went back to LN3 and at 00:00 UTC I set Science mode; unfortunately the ITF unlocked shortly after because of a sudden increase of the wind.
With the ITF in DOWN I completed the calibrations foreseen for the ITF in this state:
At this point I asked to relock; in the meantime Marco fixed the problem of the squeezing.
After one failed attempt LN3_SQZ was reached at 1:56 UTC, Science mode at 1:57 UTC; still in Science at the end of the shift.
This afternoon the commissioning work focused on the injection system (#63765, #63767); the activity was completed at 18:00 UTC. The ITF relocked up to LOW_NOISE_3_SQZ at the first attempt, but it unlocked after 12 min, most probably because of the work on NCAL; in the meantime Diego worked on cleaning up the automation.
In the evening the lock became quite difficult with increasing wind and microseismic noise; we are still trying to relock...
ISC
- 20:19 UTC: NI etalon set from 20.35 to 20.2 C (64800 s ramp time).
Today I continued the cleanup of the automation nodes: I removed excessive logging, as well as commented-out and outdated code, both old and left over from the recent ISC cleanup. The configuration files also received a substantial cleanup.
The nodes of interest are: ALS_NARM, ALS_WARM, ARMS_LOCK, DRMI_LOCK, INJ_MAIN and ITF_LOCK (up to CARM_NULL_1F so far).
Several lock acquisitions have already been performed, and the cleanup will continue on the remaining states of ITF_LOCK and the other ISC nodes left, and also on TCS and SUS.
Unmentioned nodes will be left to the respective maintainers.
Summary of the first week of ER16.
Today we blocked the beam on the LB to allow H.J. Bulten to work on the control of the EIB (details in another entry).
We took advantage of having INJ in DOWN to remove the setup that we added a few weeks ago with Walid to be able to inject laser phase noise. We indeed suspected that it could be responsible for the loss of phase margin of the IMC loop seen these last days.
When H.J. finished, we tried to relock the injection, but it was not relocking. After multiple tests we eventually noticed that the BPC was locked on reference values that were quite far from the ones it had before putting INJ in DOWN.
The alignment of the whole system had indeed drifted away during the long night+day lock, following the ITF.
After having relocked we blocked the beam again to allow Alain to do the restart of rtpc19.
Then everything relocked correctly.
We measured the OLTF of the IMC with an attenuation of 20 dB (10+10): we had a UGF of 100 kHz and 14 degrees of phase margin.
We then added a few dB of attenuation, to 22 dB (20+2), and had a UGF of 93 kHz with 16.5 degrees of phase margin (see figure).
This is better than what we had a few days ago, but still not at the level of what we had when we measured it on 04/03.
Upon reaching the site, I found the ITF in LOW_NOISE_3_SQZ and in Science Mode. The lock lasted until 13:14 UTC, when I manually unlocked to allow the planned INJ/SBE activity, carried out by Gosselin, De Rossi and Bulten. The beam was blocked on the Laser Bench at 13:18 UTC.
INJ activity still in progress.
ISYS
During the shift, the value of RFC_TRA_DC started to decrease. Experts were informed of the issue.
As already happened a few weeks ago (#63420), in association with this morning's bad weather conditions and the large microseismic noise, there was an increased rate of scattered-light glitches, best witnessed by LSC_SRCL and reproduced by the V1:SBE_EIB_GEO_H2_200Hz channel.
Figure 1: spectrogram of a scattered light glitch in LSC_DARM.
Figure 2: Top: spectrogram of several scattered light glitches in LSC_SRCL. Middle: extracted arch frequency and predicted frequency of the scatterer surface according to the equation f(t) = 2 N v_sc(t) ν_0 / c, with ν_0 the laser frequency, v_sc the speed of the scatterer (equal to the time derivative of the SBE_EIB_GEO_H2_200Hz channel), and N the number of times stray light gets reflected back and forth between the test mass and the scatterer before it joins the main beam (arXiv:2007.14876). Notice that the y-axis is the same for both the reconstructed frequency of the arches and the predicted frequency. In the box, the value of the Pearson correlation coefficient between the two quantities, 80%. Bottom: time series of SBE_EIB_GEO_H2_200Hz.
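The predicted frequency in Figure 2 follows the standard scattered-light fringe formula, f(t) = 2 N |v(t)| / λ. Below is a minimal sketch on synthetic data; the scatterer motion and sampling parameters are invented, only the formula comes from arXiv:2007.14876.

```python
import numpy as np

LAMBDA = 1.064e-6  # Virgo laser wavelength [m]

def predicted_arch_frequency(position, fs, n_bounces=1):
    """Predicted fringe frequency for a scatterer moving along the beam,
    f(t) = 2 * N * |v(t)| / lambda, with v(t) the time derivative of the
    position channel [m] and N the number of back-and-forth reflections
    between the test mass and the scatterer (arXiv:2007.14876)."""
    velocity = np.gradient(position) * fs  # finite-difference derivative
    return 2 * n_bounces * np.abs(velocity) / LAMBDA

# Synthetic example: a scatterer oscillating at 0.2 Hz with 1 um amplitude,
# roughly mimicking microseism-driven bench motion.
fs = 200.0
t = np.arange(0, 20, 1 / fs)
pos = 1e-6 * np.sin(2 * np.pi * 0.2 * t)
f_pred = predicted_arch_frequency(pos, fs)
print(f_pred.max())  # ~2.4 Hz peak fringe frequency for these parameters
```

On real data, `position` would be the (calibrated) SBE_EIB_GEO_H2_200Hz channel, and the result would be overlaid on the extracted arch frequency as in Figure 2.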
We are currently running an extended analysis to find other correlated sensors, using the list of position, speed, and acceleration channels documented in this git issue.
We have analysed all the raw channels with units of position, speed and acceleration, transforming them to reproduce the frequencies of the arches. The results confirm the V1:SBE_EIB_GEO_* family as the channels that reproduce best the frequency of the arches in LSC_SRCL. Speed and acceleration channels are not good predictors of these arches.
In the attachments, three text files with the most correlated position channels (truncated to Pearson correlation values larger than 0.1), speed and acceleration channels (truncated to 0.04).
Today, with the ITF in at least LOW_NOISE_3
Below is the evolution of the RAW, RAW_FULL and RDS streams from 2024-02-23 up to 2024-03-27.
All the Acl servers (AclISC, AclLC, AclSBE, AclFCoplev, Acl) have been running with the v1r23p17 version since 2024-03-26-15h20m-UTC.
The main new features are
The first trial was done on 2024-03-25, starting at 14h LT.
All the INJ_rtpc (rtpc19) Acl servers have been started with the Acl v1r23p17 between 2024-03-27-17h24m10-UTC and 2024-03-27-17h25m06-UTC
[For some reason, this message has remained in the Drafts, even if I received the notification that it was added to the Logbook...]
Ignore my previous entry. The highlighted channel is one of those recently added to DET, as reported in this logbook entry: #63738. I'm adding them to the excluded channels from BruCo analysis:
/data/dev/detchar/online/bruco/share/virgo_excluded_channels_Hrec_hoft.txt
EDIT: the BruCo daily results for Hrec (link) have correctly excluded this channel from the analysis. Unfortunately, I forgot to edit the corresponding list for DARM, and the coherence results for the last day (link) have been dominated by the new DET channel: please ignore this channel in the BruCo results.
Now, I have edited the excluded channel list for DARM too. I've opened this git issue to keep track of the various excluded channel lists and keep them up to date.
The ITF was found in LN3_SQZ, and the commissioning activity (SQB1 injections and sensitivity) had just finished, so, as requested by Fiodor, I performed the weekly calibration.
22:06 UTC CALIBRATED_DF_SENSITIVITY: error at start; reloading the node cleared the error; completed at 22:43 UTC.
22:45 UTC CALIBRATED_DF_PCAL: ITF unlocked as soon as I launched the calibration.
22:46 UTC CALIBRATED_SRNI: completed 23:07 UTC.
23:56 UTC CALIBRATED_DF_INtoEND_LN: not clear whether it completed or failed; the node went to completed at 23:58 UTC.
0:04 UTC ITF in LN3, but Hrec_Range_BNS was very low and it unlocked after a few seconds.
0:05 CALIBRATED_DELAY: completed.
00:57 ITF in LN3.
00:58 UTC CALIBRATED_DF_PCAL: completed at 1:32 UTC.
1:33 UTC CALIBRATED_DF_SENSITIVITY: completed at 2:10 UTC
At 2:10 UTC stop of Calibration and start Science mode.
At 2:45 UTC an earthquake unlocked the ITF, the suspensions were excited for about an hour.
Then the ITF kept unlocking at the CARM/DARM to IR step for about 1.5 h, because the green laser was very glitchy (as already happened in the last days).
Finally at 5:24 UTC the ITF was back in Science mode.
Analysing the calibration data from last night, we found that most of the injections failed. This was the consequence of two different modifications: 1/ the preparation of hardware injection, with a change in a channel name that was not fully propagated to the python scripts ; 2/ the improvement of the injection level for pcal to NI,WI measurements, adding a reference noise curve for "darkFringe_LN1" configuration, but forgetting a check of the config given as input in the script where we did not add this new name. As a consequence, some scripts were either not injecting the signal during the period of injection (check hrec, optical response, sensitivity measurements...), either failing directly when started (Pcal to IN configuration).
During the checks, one other modification has been done today around 14h45 UTC: change the name of the internal channel sent from CALnoise to the NE and WE PCal. CALnoise and the NE and WE Acl processes have been stopped and restarted to take this into account.
All the CALI_O4 python injection scripts have been quickly tested between 14h30 and 16h UTC today, and the lines or noise could be seen in the expected noise channels.
The large broadband noise up to a few hundred Hz is highly coherent between Hrec and V1:DET_B1_DC, as shown by a dedicated BruCo run: link (I'm currently rerunning the script to fix the cropped y-scale). In the table you can also find a bunch of channels coherent with the large peak at 43 Hz.
Figure 1: coherence and noise projection of V1:DET_B1_DC to Hrec.
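For reference, the coherence and a simple coherence-based noise projection between a target and a witness channel can be sketched as follows; this uses synthetic data and is not the actual BruCo implementation.

```python
import numpy as np
from scipy import signal

fs = 4096.0
t = np.arange(0, 64, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic witness (a B1_DC-like channel) and target (an Hrec-like
# channel) containing a coupled copy of the witness plus independent noise.
witness = rng.standard_normal(t.size)
target = 0.5 * witness + rng.standard_normal(t.size)

# Coherence between target and witness.
f, coh = signal.coherence(target, witness, fs=fs, nperseg=4096)

# Coherence-based noise projection: the fraction of the target ASD
# explained by the witness is ASD_target * sqrt(coherence).
f_psd, psd_target = signal.welch(target, fs=fs, nperseg=4096)
projection = np.sqrt(psd_target * coh)

# With a coupling of 0.5 on unit-variance white noise, the expected
# coherence is 0.5**2 / (0.5**2 + 1) = 0.2.
print(np.median(coh))
```

The real BruCo analysis scans a large list of auxiliary channels and ranks them by coherence with the target; the projection above only quantifies a single witness.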