ITF found in Science mode.
At 21:10 ITF unlocked; relocked at the first attempt and Science mode set at 21:56 UTC.
Guard tours (time in UTC)
21:22-21:53; 23:16-23:47; 1:11-1:38; 3:17-3:48
ITF found in LOW_NOISE_3_SQZ and in SCIENCE mode.
ITF unlocked at 13:55 UTC (TBC), and relocked at 14:40 UTC at first attempt. SCIENCE mode set.
18:30 UTC - CALIBRATION mode set to perform the planned:
- check hrec
- inject_SSFS.py
SCIENCE mode set again at 19:04 UTC.
Guard tour (UTC):
17:17 - 17:47
19:17 - 19:47
DAQ
MirrorMakerLLO process crashed. I restarted it via VPM_LLA and via shell but it crashed again. Experts informed.
I found ITF in SCIENCE. At 10:58 UTC, ITF was unlocked, and I performed the pending reload of ITF_LOCK. At 12:10 UTC, ITF returned to SCIENCE.
Guard Tour (UTC):
05:00 - 05:29
06:30 - 06:56
11:00 - 11:27
ITF found in Science mode.
At 00:59 UTC the Squeezing disengaged; automatically reengaged.
Guard tours (time in UTC)
21:1-21:38; 23:00-23:30; 1:11-1:40; 3:20-3:50
Pending actions
Other
( - ) At the next unlock reload the ITF_LOCK node.
ITF found in LOW_NOISE_3_SQZ and in SCIENCE mode. Still locked.
Guard tour (UTC):
17:43 - 18:12
19:11 - 19:40
DAQ
16:42 UTC - FbTrend100s_100000 crashed / restarted
Other
( - ) At the next unlock reload the ITF_LOCK node.
I found ITF in SCIENCE. At 07:40 UTC, ITF was unlocked. The unlock was caused by the Neovan chiller flow completely stopping, which triggered the Neovan amplifier to enter protection mode. I called the INJ on-call, C. Derossi. She went to the EE room with S. Andreazzoli and F. Gherardini to resolve the issue. They changed the filter and replaced the chiller with spare units. At 10:00 UTC, ITF was back in SCIENCE and was unlocked again at 10:10 UTC. Finally, at 11:25 UTC, ITF was back in SCIENCE.
Oncall events
ISYS
(11-10-2024 07:40 - 11-10-2024 08:50) On site
Status: Ended
Description: the Neovan chiller flow stopped completely, triggering the Neovan amplifier to enter protection mode.
Actions undertaken: C. Derossi changed the filter and replaced the chiller with a spare unit.
This morning the Neovan chiller flow completely stopped and the Neovan amplifier went into protection mode (the chiller is the one shown in photo 2).
We found the filter very dirty (even though it was changed yesterday) and the pump broken. In fact, Fabio and Sergio dismounted the chiller and found some parts completely crumbled (photos 4 and 5), which can explain how the filter became dirty again in just a few hours.
We replaced the chiller with the spare (photo 3).
The flow with the spare is back at the nominal value (around 6).
ITF found in SCIENCE mode (LOW_NOISE_3_SQZ).
The ITF unlocked at 02:04 UTC. After two failed attempts (the first at LOCKED_CARM_MC_IR, the other at ACQUIRE_LOW_NOISE_3), it returned to SCIENCE mode at 03:54 UTC.
Guard tours (UTC):
- 21:11 - 21:43
- 23:02 - 23:31
- 01:12 - 01:44
- 03:02 - 03:29
ITF found in Science mode.
At 13:35 UTC ITF in Troubleshooting mode for a problem with the Neovan chiller.
At 15:55 UTC we started to relock the ITF.
At 17:50 UTC ITF in Commissioning mode for the planned BS TX 400 mHz and FBW engagement investigation.
At 20:00 UTC ITF in Science mode.
Automation
ITF_LOCK reloaded
Guard tours (time in UTC)
17:12-17:42; 19:12-19:42
Today we wanted to keep working on the issue related to BS AA TX and the 400 mHz oscillation that arises when we switch to full-bandwidth control mode.
Unfortunately the activity was somewhat delayed by the Neovan chiller issue (https://logbook.virgo-gw.eu/virgo/?r=65298). When the problem was solved we restarted the lock acquisition. During the first part of the shift we unlocked here and there, probably due to the bad weather; after a while the robustness of the lock acquisition was recovered.
We wanted to go to LN3 with the BS TX in drift control, and eventually perform several tests on the alignment loops, both for the BS and for the DET bench.
However, we did not get the chance to perform any of the planned tests, due to several unlocks on the way to Low-Noise 3.
One unlock occurred after the onset of a 250 mHz oscillation visible in the B1 PD1 audio channel, coincident with its saturation.
For the subsequent lock we decided to make the engagement of the BS TX full bandwidth the default in the automation. Note that since yesterday we had postponed its engagement to the beginning of LN3, i.e. to the same time at which SR TY is misaligned to adjust the DCP.
That lock failed exactly at the engagement of FBW, due to exceeding the safety threshold on B1s DC.
We noticed that in the initial transient of LN3 the B1s fluctuations are usually higher, so we decided to postpone the BS TX FBW engagement even further, to 90 seconds after the start of the SR TY misalignment, in a much calmer situation.
To do so we put in the automation a self.timer condition of 90 seconds starting from the beginning of the SR TY misalignment. The modification worked and the FBW engagement was successful; the ITF nevertheless unlocked shortly after.
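The timer condition described above can be sketched as follows. This is a minimal illustration, not the actual automation code; the function name and the way the start time is passed in are assumptions, while the 90 s delay comes from the entry:

```python
import time

def fbw_engage_allowed(t_misalign_start, now=None, delay=90.0):
    """True once `delay` seconds have elapsed since the SR TY misalignment
    began, i.e. once the initial LN3 transient (with its higher B1s
    fluctuations) has had time to settle."""
    if now is None:
        now = time.monotonic()
    return (now - t_misalign_start) >= delay
```

In the automation this plays the role of the self.timer condition: the state keeps waiting, without engaging FBW, until the 90 s have elapsed.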
OPEN POINTS:
This afternoon at 13:20 UTC the flow of the Neovan chiller (the one for the electronics only) dropped from 6.5 to 1.7 (a.u.) (fig. 1).
We tried to recover the flow by changing the filter (which was very dirty) and by reversing the in and out pipes at least three times. However, the flow never exceeded 3.4.
We accidentally switched off the Neovan by pressing the emergency button, so we took the opportunity to switch off the chiller as well for a longer time and to try it closed on itself: even in this case the flow did not grow above 3.4, so we concluded that the problem is the chiller pump, or something stuck inside it, rather than the Neovan.
We left the system on, since it seems stable even with the reduced flow (we changed the threshold so that the DMS flag turns yellow instead of red). The temperatures of the Neovan pumping diodes have increased, but to an acceptable value (fig. 2).
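The DMS threshold change mentioned above amounts to reclassifying the reduced flow. A minimal sketch, with illustrative threshold values (only the nominal ~6 and the reduced ~3.4 readings come from this entry; the actual DMS thresholds are not stated):

```python
def dms_flow_flag(flow, yellow_below=4.0, red_below=1.0):
    """Map a chiller flow reading (a.u.) to a DMS-style flag colour.
    Thresholds are illustrative: after the change described above, a
    reduced-but-stable flow (~3.4) raises yellow rather than red."""
    if flow < red_below:
        return "red"
    if flow < yellow_below:
        return "yellow"
    return "green"
```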
ITF found in LOW_NOISE_3_SQZ and in SCIENCE mode. ITF still locked.
12:19 UTC - BAD_WEATHER mode set for one minute (BNS Range < 40 Mpc).
DAQ
12:50 UTC - MdVimSpectro, MdVim VPM processes restarted by Masserot.
Other
( - ) At the next unlock reload the ITF_LOCK node.
ITF found in SCIENCE mode (LOW_NOISE_3_SQZ).
At 22:05 UTC, a notice for an event was received, see S241009em.
At 02:11 UTC, the squeezing disengaged. It re-engaged shortly afterwards and the ITF was back in SCIENCE mode at 02:15 UTC.
Guard tours (UTC):
- 21:12 - 21:42
- 23:04 - 23:34
- 01:17 - 01:37
- 03:11 - 03:41
Pending actions
Other
( - ) At the next unlock reload the ITF_LOCK node.
ITF in Science mode for the whole shift.
Guard tours (time in UTC)
17:07-17:35; 19:12-19:41
Pending actions
Other
( - ) At the next unlock reload the ITF_LOCK node.
The FmTrend and FmRds servers were running on olserver114. They build the full FFL file for their stream by scanning the online and offline storage areas. After the operations performed on the offline storage area yesterday, they turned red at the VPM level even though their FFL files were being updated. Note that the FmTrend_ll and FmRds_ll servers run correctly, as they scan only the online storage areas.
Some investigations were made by
It appears clearly that there is some issue with access to the archive data area from olserver114 and olserver119.
As the olserver117 host appears not to be in use, the FmTrend and FmRds servers are running on this host for the moment and seem to run without issue.
It was due to an NFS mount issue after moving /data/archive ..."rds" and "trend" .
Now olserver114 and olserver119 are recovered:
measuring the time to access a directory :
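The directory-access measurement can be done with a snippet along these lines. This is a generic sketch; the actual path and method used by the experts are not stated in the entry:

```python
import time
from pathlib import Path

def dir_access_time(path):
    """Return (seconds, n_entries) for a single listing of `path`,
    e.g. an NFS-mounted archive area such as /data/archive.
    A slow or stale NFS mount shows up as a large elapsed time."""
    t0 = time.perf_counter()
    entries = list(Path(path).iterdir())
    return time.perf_counter() - t0, len(entries)
```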
The FmTrend and FmRds servers are back on olserver114
ITF found in LOW_NOISE_3_SQZ and in SCIENCE mode.
At 05:32 UTC ITF unlocked (TBC). ITF relocked at 08:13 UTC after several unsuccessful locking attempts. SCIENCE mode set at 08:16 UTC.
08:48 UTC - S241009an
Sub-system reports
DAQ
FmTrend and FmRds VPM processes "red" but up. The DAQ expert noticed that they have not crashed: they are still running and updating the FFL files. The trouble started after yesterday's computing intervention (65282). The issue now appears to be in the communication between the VPM and the processes...
On-line Computing & Storage
FmTrend and FmRds VPM processes "red" but up. The DAQ expert noticed that they have not crashed: they are still running and updating the FFL files. The trouble started after yesterday's computing intervention (65282). The issue now appears to be in the communication between the VPM and the processes...
Yesterday afternoon we returned to WEB for the scheduled shift on scattered light, which could not take place because of ITF problems.
We took the opportunity to move the Bartington magnetometer to a new location: we placed it on the floor of the SAS, approximately in the center, keeping the same orientation of the axes. The attached pictures show the installation. The good data are from Oct 8, 16:35 to 17:10 UTC, with the same channel names. In this location the magnetometer is at about the same distance from the cable trays in the SAS as the Metronix magnetometers are from the cable trays in the hall, while the Bartington is closer to the chillers: see the map in the third attachment. The map provided by Massimo shows the path of the electrical cables in red.
ITF found in lock acquisition; it kept unlocking at various steps due to the strong wind. While relocking, at 14:00 UTC, I set the ITF in Commissioning mode to allow the ENV team to move the shaker at the WE Building.
In the meantime we managed to relock once at 15:40 UTC, but the lock lasted only until 16:00 UTC. We also reached ACQUIRE_LOW_NOISE_3 at 16:49 UTC, but the ITF unlocked. Due to the instability of the ITF, it was not possible to proceed with the ENV measurements.
ITF back in LOW_NOISE_3_SQZ at 17:32 UTC; after the ENV team left the WE building, the ITF was put back in Science mode at 17:51 UTC.
ITF left locked.
DAQ
Since around 8:30 UTC the flags FmRds, FmTrend and Daq_olserver53..rds_ffl_latency are red. There are no clear consequences of this behavior. Experts informed of the issue.
Here are plotted the last ASC injections performed yesterday and last Saturday:
Fig.1: Saturday injections;
Fig.2: Monday injections;
With respect to the previous noise budget (21 Sept) the situation has not changed (we plot only the region where the coherence is above the threshold).
DIFFp TX and PR TX are the highest.
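The "plot only where coherence is above the threshold" step can be sketched as a simple mask. The function name and threshold value are illustrative; the entry does not state the actual threshold used:

```python
def mask_low_coherence(projection, coherence, threshold=0.5):
    """Blank out noise-projection points whose coherence is at or below
    `threshold`, so only the statistically meaningful region is drawn
    (None points are skipped by most plotting tools)."""
    return [p if c > threshold else None
            for p, c in zip(projection, coherence)]
```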
ITF found in Science mode.
At 6:00 UTC ITF in Maintenance mode; below is the list of the activities communicated in the control room:
All the activities completed at 10:15 UTC with final checks performed by F. Nenci; at that time we started to relock the ITF.
At 11:21 UTC ITF in Bad_weather.
At around 12:30 UTC I contacted the ALS on-call because the green lasers could not stay locked; investigation in progress.
This morning Piernicola checked the status of the detection tower viewports at the south, south-west and south-east corners of the tower, in order to identify which viewport could be used to visualize the position of the CP ghost beams on the B1/B5 diaphragm.
The south viewport (pictures 1, 2, 3) allows one to see the two holes of the B1/B5 diaphragm, but most of the diaphragm surface is hidden by the white flat cables, so this viewport is not very useful.
At the south-west viewport there is no camera (contrary to what we expected from the telescreen configuration, which lists a badly named camera "SDB1_FrontRight", ipcamz3). The view from this viewport is shown in picture 4. About 2/3 of the B1/B5 diaphragm surface can be observed by placing one's eye at the edge of the viewport (not very convenient for installing a camera). The bottom-left corner of the diaphragm is hidden by the meniscus lens mount.
Finally, at the south-east corner (inside the detection lab), Piernicola dismounted the faulty camera ipcamz2 (picture 5). The view from this viewport is shown in picture 6. Although the bottom-right corner of the B1/B5 diaphragm is hidden, it is still the best viewport for observing the diaphragm. Piernicola installed a new camera (ipcam59) on this viewport (picture 7). At the moment the new camera is not functioning, due to what looks like an issue with DHCP handling, but Elian, from the EGO computing department, is working on it.
Piernicola also investigated the missing camera ipcamz3. He found that it is actually installed on the PR tower (see picture 8). This camera is also not answering ping; to be investigated further.
The apparently faulty camera ipcamz2 has been stored in the second floor LAPP storage room (picture 9).