Today, I found ITF in SCIENCE.
16:30 UTC: COMMISSIONING activities started.
16:30-16:52 UTC: I performed the scheduled ISC injections (SSFS).
16:54 UTC: CALIBRATION activities started.
16:54-17:06 UTC: CALIBRATED_DELAY.
17:08-17:51 UTC: CALIBRATED_PRWI.
17:52 UTC: I attempted to perform the CALIBRATED_DF_INtoEND_LN, but for some reason, we started to lose data. After conducting an investigation, I called the expert, A. Masserot. He found that to resolve the issue, he had to act on SUSP_Fb, but we are still unsure about what caused the problem.
20:01-20:35 UTC: CALIBRATED_DF_INtoEND_LN
21:07 UTC: ITF back in SCIENCE
ITF found in Science mode.
ITF unlocked at 8:15 UTC; relocked at the first attempt. Science mode set at 9:12 UTC.
SBE
Taking advantage of the unlock at 8:15 UTC, I recovered the SNEB vertical position with the step motors.
The first two plots show that there have been a lot of glitches on the NARM_CEB signals at 158 MHz since the maintenance period.
The last plot shows the evolution, from 2024-08 to 2025-01-22, of the NARM_BEAT_{DC,157MHz_mag} and WARM_BEAT_{DC,154MHz} signals.
Here are some images taken during the maintenance on January 14th (logbook 65974), with the PCal laser on and off. The main ITF beam is not present in these images.
The spots from the PCal benches seen by the NE and WE cameras are clearly visible, and are not related to the big spots discussed in the initial logbook entry.
There is also a small spot visible which must be the scattering from the main PCal beam: around (4000, 2500) for NE and around (4500, 3000) for WE. The spot looks stronger on WE, which could be a hint of more scattering from WE, but the camera calibration is not known.
Today only the numerical noise computation is performed; no mitigation of this noise is done.
The plot compares the channels related to the photodiodes, the ones related to the filter computation, and the filtering numerical noise, with (purple) and without (blue) the PCAL lines.
From this plot, the numerical noise (DigitalNoise_corr) remains at the same level with or without the PCAL lines, meaning that the observed extra noise is not due to the filter computation.
Figure 2025-01-21_EDB_scheme.png is the scheme of the EDB optical bench. The QD2 photodiode (circled in red) was replaced by the QD4 photodiode (circled in blue).
ITF found in LCARM_NULL_1F in PREPARE_SCIENCE mode.
SCIENCE mode achieved at 22:37 UTC. It remained locked for the whole shift.
Guard tour (UTC):
18:50 - 19:25
20:30 - 21:10
22:10 - 22:35
00:20 - 00:56
04:00 - 04:35
SBE
SBE_SNEB_ACT_F0V_raw_50Hz out of DMS threshold (fig3).
Experts informed.
The objective of today's campaign is to investigate the 123 Hz bump. We performed the following actions:
In view of the installation of a new security guardian for the neoVAN head, today we installed a new PT100 temperature probe (see figs. 1-2) in parallel to the existing one (ENV_PSL_AMP_TE); it is directly connected to the guardian (fig. 3) and will be used as the trigger for the interlock.
For the time being, the guardian is in acquisition mode only (no interlock signal will be sent) as we want to check the robustness of the new probe for some days.
We also added two new channels in order to monitor the guardian data (see the decoding sketch after the list):
which monitors the temperature seen by the new probe (values to be divided by 10 for the moment)
which monitors the trigger of the guardian:
1 = normal,
2 = pre alarm,
3 = alarm – neoVAN OFF
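For clarity, here is a minimal sketch of how raw samples of these two channels can be decoded, following the conventions described above. The function and state names are illustrative placeholders; the actual channel names are the two added to the guardian.

# Decoding of the two new guardian monitoring channels (sketch only;
# mapping taken from the description above, channel names omitted).
TRIGGER_STATES = {1: "normal", 2: "pre-alarm", 3: "alarm (neoVAN OFF)"}

def decode_guardian_sample(raw_temperature, raw_trigger):
    """Convert raw channel samples into a physical temperature and a guardian state."""
    temperature_degC = raw_temperature / 10.0  # raw value is temperature x 10, for the moment
    state = TRIGGER_STATES.get(int(raw_trigger), "unknown")
    return temperature_degC, state

# Example: a raw sample of (253, 1) decodes to (25.3 degC, "normal").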
Looking at the measurements made during the last maintenance, we realised that when the PCal permanent lines were on, the noise level of the photodiodes increased compared to when the lines were off (see plot 1, where the reference plot without permanent lines is compared to the one with permanent lines at standard amplitude).
We did some more tests during today's maintenance to see where this is coming from on WE.
1/ Testing line injection with CALNoise
2/ Testing line injection with PCal_Fast
First we acted on the WE PCal lines using CALNoise: we switched the lines off and then switched them back on one at a time to see which one was causing the noise to rise.
WE PCal has 3 permanent lines at the following frequencies and amplitudes:
- 36.28 Hz at 1.5e-3
- 38.5 Hz at 2.0e-3
- 994.5 Hz at 200.0e-3
The amplitudes were chosen in order to get the same SNR on all injected lines.
When injecting one line at a time, it appeared that the issue was coming from the 994.5 Hz line only (see plot 2, where the reference plot without lines is compared to the injection of the 994.5 Hz line alone).
We wanted to find the amplitude threshold at which the noise starts to increase. To test this, we increased the amplitude of the 994.5 Hz line from 2.0e-3 up to 37e-3, where we started to see the noise increase.
See plot 3, where we compare the noise without injection to the noise with the 994.5 Hz line injected at 37e-3 amplitude.
Something interesting to note is that the noise level of Tx_PD1 and Tx_PD2 is higher for the 37e-3 amplitude than it was when injecting at 200e-3, and also than when injecting all the lines at their normal amplitudes. It is not clear why.
We did the same test for the 38.5 Hz line to see if we would see the same effect by increasing the amplitude, and the same thing happened at around ~30e-3.
See plot 4, comparing the case without lines to the injection of the 38.5 Hz line at 37e-3.
Timestamps (For 60 s each time)
One hypothesis is that this could be caused by a communication issue with CALNoise. To test this, we tried injecting the same lines but using the PCal_Fast process directly.
First we tried to inject the 38.5 Hz line at 2.0e-3 amplitude to check that it was working, but accidentally injected a noise at 10e-3. After this we were able to do the other injections, but we noticed that the noise level of Tx_PD1, in particular, was higher without the calibration lines than it was before (see plot 5, where we compare the noise without injection acquired at 8h15'15'', at the beginning of this test, to the noise acquired at 9h32').
Then we tried to see if we could reproduce the same noise increase seen in the previous test with CALNoise. First we tried with the 38.5 Hz line and found that the amplitude needed to get the noise increase was closer to 50e-3 than to the 37e-3 seen in the previous test (see plot 6 for the injection of the 38.5 Hz line at 50e-3). We did the same with the 994.5 Hz line and found that the noise increase appeared around an amplitude of 30e-3 (see plot 7).
We also did another injection at low frequency, 4.5 Hz at 30e-3 amplitude, to see how it affected the noise level. In plot 8 we can see that it increases the noise in the same way as the 38.5 Hz and 994.5 Hz lines.
To conclude, it does not appear that this increase in noise comes from using CALNoise, as using PCal_Fast directly does not improve the noise level.
Timestamps (For 60 s each time)
ITF found in LOW_NOISE_3 and in Science Mode. At 7:00 UTC I set the ITF to Maintenance Mode; here is the list of activities communicated to the Control Room so far:
- Cleaning of Scientific Area (External firm managed by Ciardelli)
- At around 7:50 UTC, Checks on TCS Chillers (Ciardelli)
- From 7:35 UTC to 10:30 UTC, Acl Update on rtpc8/9 (Masserot)
- From 7:35 UTC to 9:50 UTC, Check on WE Pcal noise (Rolland)
- From 8:00 UTC to 9:10 UTC, Check of TE/HU regulation in DetLab (Soldani)
- Elevators safety checks (External firm managed by Fabozzi)
- From 8:20 UTC to 10:20 UTC, New temperature probe on NeoVan's head (Montanari, Sposito, Cavalieri, Spinicelli)
- From 8:25 UTC to 9:05 UTC, realignment of CEB ALS Pickoff (De Rossi, Melo)
- From 9:40 UTC to 12:30 UTC, B1s quadrant check (Zendri, Lagabbe)
To allow the B1s quadrant check, from 9:38 UTC to 12:30 UTC the ITF was set in the Single Bounce NI configuration with the DET_MAIN shutter open. From 12:30 UTC I started to relock; while no action was needed to recover the infrared cavities, it was soon visible that the WARM ALS was no longer locking. Investigation of the issue is in progress.
TCS
Since yesterday morning at 11:53 UTC, the WI chiller has been automatically replaced by the backup one. This morning Ciardelli verified the status of the cooling units and, after checking their water levels and cleaning their floats, restored the system at around 7:55 UTC. It was kept under observation during the maintenance, but at 13:06 UTC the guardian again swapped the WI chiller with the backup. Investigation in progress.
DET
From 7:30 UTC - 5 minutes of OMC Lock
From 7:38 UTC to 7:59 UTC - OMC Scan
TCS Power Checks
|   |   | CH [W] | INNER DAS [W] | OUTER DAS [W] |
| W | on the pickoff | 0.281 | 0.025 | 0.260 |
| N | on the pickoff | 0.622 | 0.053 | 0.602 |
From the UPV results, I couldn't find correlations with the (safe) SDB2_B1_* channels. The channels tested with UPV+VetoPerf are listed in the attached TXT file. However, I explored this possible relationship using a different approach, which I will describe in detail below.
To complement the primary visual identification of glitch families described in #66010, I applied clustering algorithms to the glitch triggers identified by Omicron during this time frame. Specifically, I used DBSCAN (Density-Based Spatial Clustering of Applications with Noise), an unsupervised clustering method that groups data points based on their density in the parameter space of the Omicron triggers.
For this analysis, I selected triggers with SNR > 10. The classification results are shown in the glitchgram in Figure 1, where different colors represent the glitch families identified by the algorithm. While the clustering is not perfect, and the choice of glitch features and hyperparameters was not optimized or tuned in detail, the results align with the observations described in the original post. In particular:
This clustering analysis confirms the visual classifications in the initial post and provides additional insights into the structure and onset of the glitch families.
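For reference, the snippet below is a minimal sketch of how such a clustering can be set up with scikit-learn's DBSCAN. The feature set, the trigger field names and the hyperparameters are illustrative placeholders, not the exact choices used to produce Figure 1.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def cluster_omicron_triggers(triggers, snr_min=10.0, eps=0.5, min_samples=20):
    """Cluster Omicron triggers into glitch families with DBSCAN.

    `triggers` is assumed to be a dict-like object holding arrays
    'snr', 'frequency' (frequency at peak) and 'duration'; these names
    and the DBSCAN hyperparameters are placeholders.
    """
    sel = triggers['snr'] > snr_min
    # Work in log space, where the glitch families separate more naturally.
    features = np.column_stack([
        np.log10(triggers['snr'][sel]),
        np.log10(triggers['frequency'][sel]),
        np.log10(triggers['duration'][sel]),
    ])
    features = StandardScaler().fit_transform(features)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    return sel, labels  # label -1 marks triggers left unclassified (noise)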
Focusing on the "purple glitch family": these glitches are characterized by very large SNR and a frequency at peak in the bucket. Therefore, to study the relation with the B1 saturation, I selected glitch triggers with SNR > 300 (and frequency at peak > 90 Hz, although this is not necessary to further restrict the selection). The selected glitches are shown in the glitchgram in Figure 2. I compared the GPS times of these glitches with the times when the B1 saturation flag was active, represented by the vertical lines. Blue vertical lines correspond to times when this flag was active and fell within a delta_t of 2 seconds of the selected high-SNR glitches. Gray lines correspond to B1 saturations not associated with those glitches. The result doesn't change significantly when increasing the delta_t up to some tens of seconds (the lower limit is set by the resolution of the trend channels).
Comparing blue and gray lines, using a minimum SNR threshold of 300, 68.59% of the high-SNR glitches have a flag nearby. These are the percentages obtained by changing the threshold (a sketch of the coincidence counting is given after the table):
SNR threshold | Coincidence [%] |
100 | 57.60 |
300 | 68.59 |
500 | 74.9 |
1000 | 70.32 |
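To make the procedure explicit, here is a minimal sketch of the coincidence counting behind the table above. The array and function names are placeholders; in the actual analysis the inputs are the GPS peak times of the selected Omicron triggers and the times at which the B1 saturation flag (from the trend data) was active.

import numpy as np

def coincidence_fraction(glitch_gps, flag_gps, delta_t=2.0):
    """Percentage of glitches with a B1-saturation time within +/- delta_t seconds."""
    glitch_gps = np.asarray(glitch_gps, dtype=float)
    flag_gps = np.sort(np.asarray(flag_gps, dtype=float))
    # For each glitch, find the closest flag time via binary search.
    idx = np.searchsorted(flag_gps, glitch_gps)
    idx = np.clip(idx, 1, len(flag_gps) - 1)
    nearest = np.minimum(np.abs(flag_gps[idx] - glitch_gps),
                         np.abs(flag_gps[idx - 1] - glitch_gps))
    return 100.0 * np.mean(nearest <= delta_t)  # percentage, as in the table above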
In the central panel of the same image, I reported the hourly rate of high-SNR glitches and B1 saturations. There seems to be a clear correspondence between the two curves. These results support the hypothesis by Michal that high-SNR glitches are associated with saturations of the B1 photodiode.
In Figure 3 I report the detail of a day with many high-SNR glitches and many B1 saturations.
Concerning the PSL/INJ side, since the last SL Temp controller failure, the PSTAB corrections have increased more than the usual trend of the last few days (see fig. 1).
This morning we brought the temperature of the amp head back to its nominal value (the one used since the beginning of the year); however, this did not completely recover the AMP power (see fig. 2).
The PSTAB noise has nevertheless improved, so the related glitches in Hrec should now be reduced.
The setpoint of the chiller has been lowered by one additional degree in order to bring the neoVAN head back to around 25°C.
The Acl servers running on the SNEB's rtpc and the SWEB's rtpc have been restarted with the latest Acl release v1r35p17.
The main motivations were
Operations performed
To recover the correct ALS signals for ALS lock, the SWEB and SNEB LC TX and TY set points have been restored with the latest values before the restart of the server.
This morning we swapped the neoVAN chiller filter with a new one, since the PMC_TRA power had been slowly decreasing over the last week in correspondence with a temperature increase of the neoVAN head. We also lowered the set point from 18 to 17 degrees.
This morning we reoptimized the injection in the pickoff fiber (see attached plot).
On the underlying infrastructure, we have had three correlated storage disconnections, which match the two events on the 10th of January (2025-01-10-01h10m-UTC and 2025-01-10-04h44m-UTC) and the first one on the 15th (2025-01-15-04h15m-UTC).
The other two have no corresponding events on the underlying infrastructure.
The problem is the same one that happened the last few times: overload on the fileservers, in addition to the increased latency introduced by the new version of the fault-tolerance management on the underlying infrastructure.
The vendor has been informed, but at the moment it seems impossible to improve things further on the infrastructure side, since all the necessary fine-tuning has already been applied.
The ITF kept SCIENCE mode for the whole shift.
Guard tour (UTC):
23:00 - 23:40
01:00 - 01:45
03:30 - 04:10