Figure 1: the CARM sensor RFC 6MHz I shows noise with a 1/sqrt(f) shape between 10 Hz and 200 Hz. On other signals (for example B2) we know that this is demodulation amplitude noise, that its amplitude is proportional to the signal amplitude, and that it can be subtracted using the amplitude noise of a different RF frequency demodulated on the same photodiode. There is no coherence with RFC 12MHz mag, but this is likely because 6MHz I is in loop and changes sign several times per second, which would destroy any coherence.
Figure 2 shows that there are times during lock acquisition when the RFC 6MHz I signal has a large offset from zero.
Figure 3: during those times RFC 6MHz I and RFC 12MHz mag are coherent, so one could be used to subtract noise from the other.
Figure 4: looking at the RFC RF spectrum, the 12MHz line is the highest, so it is the best candidate for amplitude noise subtraction; in the time domain it has a roughly constant non-zero value.
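The subtraction described above can be sketched numerically: estimate the witness-to-target transfer function from cross- and power spectra, then remove the coherent part. This is a minimal illustration on synthetic data; the sampling rate, coupling value, and channel roles (target standing in for RFC 6MHz I, witness for RFC 12MHz mag) are assumptions, not measured Virgo parameters.

```python
# Toy coherence-based noise subtraction with a witness channel.
# All signals are synthetic; only the method mirrors the entry above.
import numpy as np
from scipy import signal

fs = 1000.0                       # sampling rate [Hz] (assumed)
t = np.arange(0, 64, 1 / fs)
rng = np.random.default_rng(0)

amp_noise = rng.standard_normal(t.size)                        # common amplitude noise
witness = amp_noise + 0.1 * rng.standard_normal(t.size)        # stand-in for 12MHz mag
target = 0.5 * amp_noise + 0.1 * rng.standard_normal(t.size)   # stand-in for 6MHz I

# Estimate the witness->target transfer function from CSD/PSD ...
f, Pww = signal.welch(witness, fs, nperseg=4096)
_, Pwt = signal.csd(witness, target, fs, nperseg=4096)
tf = Pwt / Pww

# ... and subtract the coherent part (a single averaged coupling here;
# a frequency-dependent fit would be used on real data).
coupling = np.median(np.real(tf))
residual = target - coupling * witness

_, Ptt = signal.welch(target, fs, nperseg=4096)
_, Prr = signal.welch(residual, fs, nperseg=4096)
print(f"estimated coupling: {coupling:.2f}, "
      f"median PSD reduction: {np.median(Ptt / Prr):.1f}x")
```

On real data the coupling would be estimated only in segments where the target has a non-zero offset (as in Figures 2-3), since in-loop sign flips destroy the coherence.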
This correlation with WE correction is very interesting.
Figure 1: looking at the spectra of the corrections, the WE correction has 10 times more noise below 0.1 Hz than the NE correction, both during this glitchy time and at normal times. It is a noise added in the DSP, and it is not present on the LSC signals as received by the DSP; what signal is being added needs to be investigated. The signal added to WE makes the WE correction RMS clearly higher than the NE correction when viewed in the time domain.
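The comparison above amounts to a band-limited RMS below 0.1 Hz. A minimal sketch on synthetic data (the 10x low-frequency excess is injected by hand; nothing here comes from the real correction channels):

```python
# Band-limited RMS comparison of two synthetic "correction" signals.
# The WE stand-in gets an extra low-frequency component ~10x larger.
import numpy as np
from scipy import signal

fs = 10.0                               # sampling rate [Hz] (assumed)
t = np.arange(0, 4096, 1 / fs)
rng = np.random.default_rng(2)

ne = rng.standard_normal(t.size)        # NE-like: flat noise
b, a = signal.butter(2, 0.1 / (fs / 2), btype="low")
we = ne + 10 * signal.lfilter(b, a, rng.standard_normal(t.size))  # WE-like

def blrms(x, fmax=0.1):
    """RMS from the PSD integrated over 0 < f <= fmax."""
    f, Pxx = signal.welch(x, fs, nperseg=8192)
    df = f[1] - f[0]
    band = (f > 0) & (f <= fmax)
    return np.sqrt(np.sum(Pxx[band]) * df)

ratio = blrms(we) / blrms(ne)
print(f"WE/NE band-limited RMS ratio below 0.1 Hz: {ratio:.1f}")
```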
ITF in Science mode for all the shift.
Guard tours (time in UTC)
21:16-21:46; 23:00-23:30; 1:00-1:30; 3:00-3:30
ITF found in relocking and in COMMISSIONING mode.
At 15:49 UTC ITF relocked in a stable way at LN3_SQZ. End of activity. SCIENCE mode set.
16:18 UTC - 1 minute of ADJUSTING mode in order to allow Masserot to restart LockMoni process.
A one-minute drop to 0 of the BNS Range (plot 1) at 18:41 UTC (fig. 2), due to missing data (TBC).
Guard tour (UTC):
19:07 - 19:36
The Acl servers running on ISC_rtpc and SSFS_rtpc were restarted with the latest Acl release, v1r30p17.
After these updates, the ITF was relocked successfully at LOW_NOISE_3_SQZ.
The attached plot compares the LSC and ASC elapsed times according to the Acl release (purple: Acl v1r23p17, blue: Acl v1r30p17) for the transition up to CARM_NULL_1F: there is a small increase of the ASC elapsed time.
The main improvements between the two Acl releases are shown in the .pptx file.
This morning we swapped the chiller used to cool the Neovan electronics (fig. 1), which is meant to be a spare, with another one that TCS lent us (figs. 2 and 3). This chiller had in fact been temporarily plugged into the Neovan amplifier electronics on the 11th of Oct (see entry #65302), but since then we had no spare anymore.
The new chiller seems to work fine (plot 4), even though the flow is smaller. The temperature is much better stabilized with this chiller.
We then plugged the spare chiller (the one in fig. 1) into the main circuit with the flow reversed, in order to unblock any small parts that could have been stuck, causing the temperature increase of the Neovan head in the last weeks. After a few minutes we switched on again the usual chiller dedicated to the main circuit (fig. 4). The overall effect is a slight decrease of the Neovan head temperature (fig. 5).
Today, I found ITF in science mode. Maintenance started at 06:06 UTC.
Below is the list of maintenance activities communicated to the Control Room:
Here is the list of maintenance tasks I performed:
TCS Power Checks
| Arm | Location | CH [W] | INNER DAS [W] | OUTER DAS [W] |
|-----|----------|--------|---------------|---------------|
| W | on the pickoff | 0.295 | 0.025 | 0.265 |
| N | on the pickoff | 0.645 | 0.06 | 0.624 |
LSC injections of Saturday 19th.
Fig. 1: noise budget wrt DARM. In the region where the coherence is above the selected threshold, the CARM loop seems to be the most limiting.
Fig. 2: noise budget wrt Hrec. Coherence is poor almost everywhere, probably due to the subtractions (?). However, the same conclusion can be drawn concerning the CARM loop.
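A coherence-thresholded projection of this kind can be sketched as follows; the channels, coupling, and threshold are synthetic placeholders, not the actual noise-budget configuration:

```python
# Toy noise-budget projection: show the contribution of an auxiliary
# loop (CARM stand-in) to a target channel (DARM stand-in) only where
# the coherence exceeds a threshold. Data are synthetic.
import numpy as np
from scipy import signal

fs, nperseg = 1000.0, 4096
t = np.arange(0, 64, 1 / fs)
rng = np.random.default_rng(1)

aux = rng.standard_normal(t.size)               # auxiliary-loop channel
darm = 0.3 * aux + rng.standard_normal(t.size)  # target channel

f, coh = signal.coherence(aux, darm, fs, nperseg=nperseg)
_, Pdd = signal.welch(darm, fs, nperseg=nperseg)

# Projected ASD contribution = ASD_darm * sqrt(coherence), masked where
# the coherence estimate is unreliable (threshold chosen arbitrarily).
threshold = 0.05
projection = np.sqrt(Pdd * coh)
projection[coh < threshold] = np.nan
```

Masking below the threshold is what produces the gaps seen in such plots: with N averages the coherence estimator has a bias floor of order 1/N, so bins near that floor carry no information about the true coupling.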
The FmRawBack server has been stopped at 15h28m UTC (red line in the following plots).
The attached plots show:
As a new trial, the following servers were restarted:
ITF found in Science mode; it kept Science mode for all the shift.
Guard tours (time in UTC)
21:16-21:46; 23:00-23:30; 1:00-1:30; 3:00-3:30
ITF found in DOWN and in COMMISSIONING mode. "Input power change with IMC locked" activity still in progress.
ITF relocked at LN3_SQZ in a stable way.
After a BNS_Range glitch, at 17:45 UTC I started the ENV noise injections. Unfortunately, at 18:26 UTC the ITF unlocked (TBC) before the end of the injections (ENV expert informed). The injections will be repeated next time.
ITF relocked at LN3_SQZ at the first attempt. At 19:14 UTC I set CALIBRATION mode, but I launched the first calibration only at 19:20 UTC, waiting for the BNS Range to stabilize.
- 19:20 UTC - CALIBRATED_DF_SENSITIVITY
- 20:18 UTC - CALIBRATED_DF_PCAL
SCIENCE mode set at 20:49 UTC
Guard tour:
17:37 - 18:07
19:16 - 19:45
As a trial, the FmRawBack server has been stopped at 2024-10-21-16h28 UTC.
The DAQ glitches can be explained by the fact that since 4 October the input flux to the fs01 NFS fileserver has increased again, after the pause of September.
We have understood that the FdIOServer thread that sends frames via Cm is the same one that writes logs on /virgoLog (served by fs01), therefore an overload of fs01 causes the whole thread to lag.
The increase of the flux (to some of the /virgoLog, /virgoData, /olusers or /virgoDev filesystems) happened again in correspondence with a restart of some Virgo processes, triggered by the unavailability of the /data/archive area on 4 October.
From an analysis of the traffic on fs01 it turns out that the major writers are currently olserver119 and olserver114, in this order.
On both servers some Fm processes run which, as far as I understand, write on /virgoData, which indeed resides on fs01, explaining at least part of the overload.
It is not possible from the OS side to measure which of the processes on those two servers play a role, because of the risk of impacting performance.
It should also be explained why the writing flux remained lower throughout September.
The first attached plot shows the latency.
The red rectangle refers to a full loss of data that occurred between 2024-10-14-10h56m38s and 2024-10-14-10h57m25s. The attached .txt file shows, for the TolmFrameBuilders and the Imaging servers, the FdIO error reports related to this event.
After this event there are some latency jumps (purple rectangle) on all the frame providers: TolmFrameBuilders, slow frame builders and Imaging servers (see the last attached plot).
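The thread coupling described above (frame sending and log writing sharing one thread) can be illustrated with a toy decoupling: a dedicated writer thread drains a queue, so a slow fileserver no longer stalls the sender. This is purely illustrative Python, not the actual FdIOServer code.

```python
# Toy illustration of decoupling frame sending from slow log writes.
# The sleep stands in for NFS latency on an overloaded fileserver.
import queue
import threading
import time

log_q = queue.Queue()

def slow_log_writer():
    # Dedicated logger thread: the only place that blocks on "NFS".
    while True:
        msg = log_q.get()
        if msg is None:
            break
        time.sleep(0.05)   # simulated slow write to the fileserver

threading.Thread(target=slow_log_writer, daemon=True).start()

def send_frame(n):
    # The sender only enqueues the log message; it never blocks on I/O.
    log_q.put(f"sent frame {n}")

start = time.monotonic()
for n in range(20):
    send_frame(n)
elapsed = time.monotonic() - start
print(f"20 frames queued in {elapsed * 1000:.1f} ms")
```

In the coupled design described in the entry, each send would instead pay the full write latency in-line, which is exactly the lag observed when fs01 is overloaded.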
I found ITF in SCIENCE. At 06:39 UTC, ITF was unlocked. At 07:57 UTC, commissioning activities began:
Air Conditioning
There was a problem with the UTA at WEB (D. Soldani, R. Romboli, and S. Andreazzoli). They replenished the missing water and turned on the diesel heater. Further investigation will occur tomorrow during maintenance.
On Friday, Oct 18, Piernicola installed a camera on the south-east viewport of SDB1, located in the detection lab clean room. It is a camera with an adjustable objective, ipcamz3, and it was installed in place of ipcam59 (https://logbook.virgo-gw.eu/virgo/?r=65283), which was too wide-angle to see scattered light on SDB1.
After manually adjusting the focus, increasing the exposure time, and removing the infrared filter through the camera web interface, scattered light could be seen clearly in CARM NULL 1F.
Figure 1 shows the bench with the light turned on.
Figure 2 shows the view in CARM NULL 1F. There are several beam spots visible on the B1/B5 diaphragm. One is above and to the right of the upper hole of the diaphragm; this is where one of the CP ghost beams is expected, based on measurements from 2019 (https://logbook.virgo-gw.eu/virgo/?r=44446). However, the symmetric ghost beam above and to the left is not visible; instead there is a spot horizontally to the left of the upper hole. This could be the other CP ghost beam, which would mean the ghost beam has moved, or it could be a bright pixel on the camera, as many are visible in the image.
The B5p beam might also be visible under the bottom hole, but this is not certain, as that spot could also be the edge of the large diaphragm between the two parabolic mirrors M1 and M2.
ITF found in LOW_NOISE_3_SQZ and in Science Mode. It kept the lock for the whole shift.
ITF left locked.
Guard tour (UTC):
21:23 - 21:53
23:17 - 23:45
01:12 - 01:41
03:07 - 03:37
ITF found in Science mode
ITF unlocked at 17:53 UTC; it relocked at the first attempt and Science mode was set at 18:36 UTC.
ENV
High wind activity at the beginning of the shift; it slowly decreased.
Guard tours (time in UTC)
13:00-13:30; 15:00-15:30; 17:09-17:36; 19:17-19:47
ITF found in LOW_NOISE_3_SQZ and in SCIENCE mode.
At 05:29 UTC the ITF unlocked and the Guardian of MC opened the ID loops. The unlock and the shaking of the suspension were due to the unexpected closing of the AC_TOWERMC..V51ST valve (in front of the turbo pump). The MC suspension position changed, and when I tried to close the loops the F0 correction was very high, so I contacted the MSC expert to fix the problem.
TROUBLESHOOTING mode set.
Boschi worked from remote with the motors to recover the correct position of the Super Attenuator and, at 06:50 UTC, he fixed the position and the controls of the MC suspension, so I started to check the status of the injection system. I discovered that it was not relocking anymore, so I contacted the ISYS on-call. The problem was fixed from remote by realigning IMC ty.
PREPARE_SCIENCE mode set.
ITF relocked at LN3_SQZ at the first attempt at 08:53 UTC, after realigning NE (about 2 urad in ty) and the usual cross-alignment in ACQUIRE_DRMI.
08:53 UTC - SCIENCE mode set.
At 09:56 UTC I set ADJUSTING mode to allow R. Macchia, who came on site, to check the status of the valve in the MC building. After the check we decided to postpone the intervention on the valve to Tuesday morning, during maintenance.
10:10 UTC - SCIENCE mode set.
Guard tour (UTC):
05:00 - 05:27
07:00 - 07:27
09:00 - 09:27
11:00 - 11:27
DAQ
05:30 UTC - PyPhaseCamB1p process crashed / restarted
EnvMoni
During the inspection in MC, Macchia noticed a very noisy fan in a black rack below the platform.
ISYS
(20-10-2024 06:45 - 20-10-2024 07:45) From remote
Status: Ended
Description: IMC and RFC unlocked. The IMC was not relocking anymore due to a drift of its position in ty.
SUSP
(20-10-2024 05:45 - 20-10-2024 06:45) From remote
Status: Ended
Description: The Guardian of MC opened the ID loops; there were problems closing the loops.
Around 7:30 LT, a valve of the MC tower pumping system (not along the beam) automatically closed for an unknown reason.
The situation seems under control, but the reason for the closing is still to be understood. In coordination with the control room, a local visual inspection is necessary.
ITF found in LOW_NOISE_3_SQZ and in Science Mode. It kept the lock for the whole shift.
ITF left locked.
Guard Tour (UTC):
21:16 - 21:45
23:06 - 23:36
01:18 - 01:48
03:16 - 03:46
ITF found in Science mode.
From 18:30 UTC to 19:02 UTC ITF in Calibration mode for:
Software
DMSpublisher restarted because it was not working properly.
Guard tours (time in UTC)
13:15-13:42; 15:00-15:26; 17:12-17:41; 19:16-19:45.
ITF found in LOW_NOISE_3_SQZ and in SCIENCE mode. ITF still locked.
Guard tour (UTC):
05:00 - 05:26
06:30 - 06:56
11:35 - 12:01