Virgo Runs (O4c)
tomelleri - 23:00 Saturday 23 August 2025 (67542) Print this report
Operator Report - Afternoon shift

ITF found locked in LOW_NOISE_3 with SCIENCE mode (Autoscience ON); BNS Range ~55 Mpc. ITF set to CALIBRATION mode at 19:30 UTC for the daily calibration and the LSC noise injection planned for today. Back in SCIENCE at 20:13 UTC (with Autoscience engaged). ITF left locked.

Guard tour (UTC)
12:50 -> 13:29
16:00 -> 16:40
18:40 -> 19:21

Sub-system reports

Calibration
(times in UTC)
19:30 - 19:49 CALIBRATED_DF_DAILY (a.k.a. check hrec) took about twice as long as expected (~10').
To fit within the half-hour calibration slot I did not wait for a glitch on hoft, and instead performed the LSC injection as soon as possible.
19:50 - 20:09 ran inject_lsc.py

Images attached to this report
Virgo Runs (O4c)
amagazzu - 14:58 Saturday 23 August 2025 (67540) Print this report
Operator Report - Morning shift

ITF found in LOW_NOISE_3 and in Science Mode. It kept the lock for the whole shift.
ITF left locked.

Guard Tour (UTC)
5:24 - 6:05 
8:14 - 8:53
10:32 - 11:12 
12:50 -

Images attached to this report
Virgo Runs (O4c)
gherardini - 6:58 Saturday 23 August 2025 (67539) Print this report
Operator Report - Night shift
The ITF kept Science mode for the whole shift.

- guard tours(UTC):
21:10 --> 22:00
23:20 --> 0:00
1:30 --> 2:10
3:30 --> 4:10
Images attached to this report
Virgo Runs (O4c)
tomelleri - 22:54 Friday 22 August 2025 (67537) Print this report
Operator Report - Afternoon shift

ITF remained locked in LOW_NOISE_3 with SCIENCE mode (Autoscience ON); BNS Range unstable, oscillating between 46-55 Mpc due to glitches.

Guard tour (UTC)
17:56 -> 18:33

Sub-system reports

ISYS
Red flag on DMS: RFC>mean(INJ_RFC_TRA_DC_50Hz,10). Low transmitted power through RFC due to SIB1 misalignment, awaiting expert intervention.

Images attached to this report
On-call intervention (General)
carbognani, seder - 16:10 Friday 22 August 2025 (67538) Print this report
Comment to On-call intervention (67536)

Among the external network connections, the Low Latency Data Transfer via Kafka was also affected, resulting in Virgo data missing on the CIT side.

Precursors of the problem were reported by the Icinga2 monitoring at the top level, which flagged the Virgo Low Latency machines as not pingable:

lowlatency-virgo is DOWN

Host check output:

PING CRITICAL - Packet loss = 80%, RTA = 8635.61 ms

Notification type: PROBLEM
Date time: Thu Aug 21 20:18:11 2025 UTC

Then the process in the LowLatencyAnalysis VPM dedicated to monitoring the Cascina -> CIT link (V1KafkaCITIn) accurately reported the data loss:

2025-08-22-01h23m36-UTC>WARNING-Miss 11408 seconds between 1439849472 and 1439860880
2025-08-22-01h23m36-UTC>INFO...-CfgReachState> Active(Active) Ok

At the moment V1KafkaCITIn reports these events as warnings (typically we can have interruptions of a few seconds that are handled by the internal Kafka mechanism); we intend to modify the process so that it goes into error state (and triggers DMS notifications for better monitoring) when the interruption lasts more than a certain time (to be set as a parameter of the process).
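For reference, the gap reported above corresponds to 1439860880 - 1439849472 = 11408 s (about 3.2 hours), far beyond the few-second interruptions that the internal Kafka mechanism absorbs. A minimal sketch of the intended escalation logic (names and threshold are hypothetical, not the actual V1KafkaCITIn code) could look like this:

# Sketch only: escalate a reported data gap from WARNING to ERROR once it
# exceeds a configurable duration; names and threshold are hypothetical.
ERROR_GAP_THRESHOLD_S = 60  # process parameter, to be tuned by the experts

def gap_severity(last_gps, new_gps, threshold_s=ERROR_GAP_THRESHOLD_S):
    """Classify the interruption between two consecutive frame GPS times."""
    gap = new_gps - last_gps
    if gap <= 0:
        return "OK"        # contiguous data
    if gap <= threshold_s:
        return "WARNING"   # short gap, handled by the internal Kafka mechanism
    return "ERROR"         # long gap: would trigger a DMS notification

# With the values from the log above: gap_severity(1439849472, 1439860880) -> "ERROR"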

Note that the data loss also occurred for the incoming data, but only for the LLO link, as reported by the writing process L1KafkaCasIn:

2025-08-22 01h21m13 UTC FdIOGetFrame: miss 11413 seconds between 1439849472.0 and 1439860885.0

2025-08-22 01h21m13 UTC Input frames are back; gps=1439860885 latency=6.6

The LHO link was not affected. This may be due to the complexity of the network outage generated by the firewall.

Virgo Runs (O4c)
menzione - 14:54 Friday 22 August 2025 (67535) Print this report
Operator Report - Morning shift

ITF found in LOCKED_ARMS in EARTHQUAKE mode.
I started to relock, but the ITF systematically unlocked at LOCKING_CARM_NULL_3F. Experts at work.
At Mantovani's suggestion I moved SR by about 1 urad in TY+ during LOCKING_ARMS_BEAT_DRMI_3F. This solved the problem and the lock acquisition was able to continue.
Probably the TCS is not compensating well. The TCS expert checked the status and will fix it in case of further difficulties in relocking.

ITF back in SCIENCE mode at 08:49 UTC. AUTOSCIENCE enabled.

Sub-system reports

DET

During all relocking attempts I had to re-open the shutters and re-enable the vbias for B1p_QD1, B1p_QD2, B5_QD1 and B5_QD2.

Oncall events

ISC
(22-08-2025 07:45 - 22-08-2025 08:45) From remote
Status: Ended
Description: ITF unlocks systematically at LOCKING_CARM_NULL_3F.

TCS
(22-08-2025 08:20 - 22-08-2025 08:40) From remote
Status: Ended
Description: check on TCS_WI_CO2_POWER_CH_PICKOFF

Images attached to this report
On-call intervention (General)
Oncall-system - 9:18 Friday 22 August 2025 (67536) Print this report
On-call intervention

The following report has been submitted to the On-call interface.

On-call events -> Network

Title: EGO firewall issue

Author(s): kraja, cortese

Called at: 00:50, 22-08-2025, by: Sposito
Remote intervention: Started: 00:50, 22-08-2025; Ended: 03:45, 22-08-2025
On-site intervention: Started: ; Ended:
Status: Action pending
Operator when issue resolved: Sposito

Details:

The call was originated by the operator on duty (G. Sposito) at approximately 00:50 LT, who reported a problem with the connection to EGO services (Captive portal, DMS site, Mail, Operators) from remote.
After an initial analysis, S. Cortese was also involved (called at approximately 01:10 LT).
The analysis revealed that the problem affects the EGO firewall, more precisely one of its two nodes. Once we were certain that only one node was involved, we relocated the services to the second node, and the situation returned to normal at approximately 03:40 LT.
Tests have revealed that the problem on the first node persists, so further investigations will be carried out in the coming days to understand the type of problem encountered.
During this issue only external connections were affected, together with the various EGO/Virgo websites (e.g., logbook, vmd).
There was no impact on the interferometer network. In fact, for the entire duration of the issue, the interferometer remained locked in science mode.
In addition, the operator was able to connect via VPN and open thinlinc sessions as usual for the normal duty.

* Note that any files attached to this report are available in the On-call interface.

Comments to this report:
carbognani, seder - 16:10 Friday 22 August 2025 (67538) Print this report


Virgo Runs (O4c)
Sposito - 6:55 Friday 22 August 2025 (67534) Print this report
Operator Report - Night shift

Today I found the ITF in SCIENCE. 02:36 UTC: ITF unlocked. 02:46 UTC: unrequested LOW_NOISE_1. This is likely a bug where an unlock operation was erroneously triggered during the attempted lock acquisition, leading to a contention issue. Given that this issue has been occurring frequently, restarting the CALI node was the chosen action to resolve the problem temporarily (see entry 67316). Additionally, I noted the effects of a magnitude 7.1 earthquake in Antarctica. As a result, I updated the status to EARTHQUAKE and configured the ITF to LOCKED_ARMS_IR. By the end of the shift, the effects of the earthquake were still present and the system remained in this state.

I opened and closed the position loops for SNEB, SWEB and SPRB.

Sub-system reports

On-line Computing & Storage
22:30-02:00 UTC During the shift, I identified an issue with the internal connections of the firewall, which was causing disruptions to our network services. As this was a critical infrastructure problem, I immediately contacted the Computing on-call team to report the incident and request assistance. E. Kraja and S. Cortese performed a thorough diagnostic of the firewall's configuration and log files. They identified the root cause of the connectivity problems and implemented a corrective action.

Images attached to this report
Virgo Runs (O4c)
gherardini - 22:58 Thursday 21 August 2025 (67531) Print this report
Operator Report - Afternoon shift
The ITF unlocked at 13:07 UTC; the relock was difficult because of strong wind over the site. The situation improved during the afternoon: the ITF locked at LOW_NOISE_3 at 17:35 UTC and the scheduled commissioning activity started (#67533). The ITF unlocked just at the end of the work; again the relock was a bit difficult, with some unlocks at the CARM_NULL steps. Science mode started at 20:36 UTC.
Images attached to this report
AdV-ISC (Commissioning up to first full interferometer lock)
mantovani - 21:28 Thursday 21 August 2025 (67533) Print this report
DARM oltf studies as a function of SR wp

The shift started only at 19:30 LT due to the strong wind activity that prevented locking in the first part of the shift.

DARM OLTF for different SR ty alignment (i.e. DCP)

19:47 UTC - SR TY in LN3 config 180sec DARM_noise_high ampl 8e-4 (DCP~200Hz to be calibrated)

19:57 UTC - SR TY -0.4urad 180sec DARM_noise_high ampl 1e-3(DCP~230Hz to be calibrated)

18:06 UTC - SR TY -0.9urad 180sec DARM_noise_high ampl 1e-3 (DCP~280Hz to be calibrated)

18:14 UTC - SR TY -1.3urad 180sec DARM_noise_high ampl 1e-3 (DCP~340Hz to be calibrated)

18:22:30 UTC - SR TY -1.7urad 180sec DARM_noise_high ampl 2e-3 (DCP~370Hz to be calibrated)

18:30:30 UTC - SR TY -2.16urad (approx aligned) 180sec DARM_noise_high ampl 2e-3 (DCP~430Hz to be calibrated)

The SRCL SET has then been varied with the DARM noise on:

18:34:30 UTC - SRCL SET = 0, 180sec DARM_noise ampl 2e-1, and then ramping up in steps (steps of ~4 min).

It has to be noted that the angular loops for SR were opened when the DARM noise was on.

I arrived at something between 10 and 20 Hz and then had to stop in order to recover the ITF for the restart of the data taking.

While I was coming back, the ITF unlocked.
Data will be analyzed offline.

Virgo Runs (O4c)
tomelleri - 15:03 Thursday 21 August 2025 (67529) Print this report
Operator Report - Morning shift

ITF found locked in LOW_NOISE_3 with SCIENCE mode (Autoscience ON); BNS Range unstable, oscillating between 52 and 57 Mpc due to glitches observed in the 10-100 Hz band of the Hrec_hoft spectrum. ITF unlocked at 12:12 UTC due to high wind (DMS plot in fig.1), sea and seismic activity (earthquake id: us6000r2s3), and unlocked again at LOCKING_CARM_NULL_1F at 12:25 UTC; ITF set to BAD_WEATHER. ITF back in LN3 with SCIENCE mode (Autoscience ON) at 13:02 UTC.

Sub-system reports

DAQ
09:51UTC - Restart of VPM processes: FFLMoni53, FFLMoni147 and FmRawLL by Masserot to fix problem with raw_ll.ffl (entry #67530)

Vacuum
12:45UTC Restart of particle counter hardware rack by Pasqualetti in CEB.

Images attached to this report
AdV-DAQ (Data collection)
narnaud - 8:15 Thursday 21 August 2025 (67530) Print this report
Problem with raw_ll.ffl

There is a problem with raw_ll.ffl: at any time (see two examples below, taken about one minute apart), only the last 500 seconds of (rawshort) data are available. The rawback part is not updated and remains about two days old.

Thu Aug 21 08:11:47 CEST 2025

Segment list:
         + 1438823400 [2025-08-10 01:09:42+00:00 UTC] -> 1439630300 [2025-08-19 09:18:02+00:00 UTC]: duration = 806900 seconds
         - Followed by a gap of 161100 seconds
         + 1439791400 [2025-08-21 06:03:02+00:00 UTC] -> 1439791900 [2025-08-21 06:11:22+00:00 UTC]: duration = 500 seconds

Thu Aug 21 08:12:36 CEST 2025

Segment list:

         + 1438823400 [2025-08-10 01:09:42+00:00 UTC] -> 1439630300 [2025-08-19 09:18:02+00:00 UTC]: duration = 806900 seconds
         - Followed by a gap of 161150 seconds
         + 1439791450 [2025-08-21 06:03:52+00:00 UTC] -> 1439791950 [2025-08-21 06:12:12+00:00 UTC]: duration = 500 seconds
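As a cross-check of the listings above, a minimal sketch (not the actual FFL tooling) that recomputes segment durations and gaps from (start, end) GPS pairs:

# Sketch only: recompute durations and inter-segment gaps from an FFL-style
# segment list given as (start_gps, end_gps) pairs.
def summarize_segments(segments):
    for i, (start, end) in enumerate(segments):
        print(f"+ {start} -> {end}: duration = {end - start} seconds")
        if i + 1 < len(segments):
            print(f"- Followed by a gap of {segments[i + 1][0] - end} seconds")

# First listing above: 806900 s of rawback, a 161100 s gap, then 500 s of rawshort
summarize_segments([(1438823400, 1439630300), (1439791400, 1439791900)])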

Virgo Runs (O4c)
Sposito - 6:54 Thursday 21 August 2025 (67528) Print this report
Operator Report - Night shift

ITF found in LOW_NOISE_3 and in Science Mode. It kept the lock for the whole shift.

Images attached to this report
Virgo Runs (O4c)
gherardini - 22:59 Wednesday 20 August 2025 (67527) Print this report
Operator Report - Afternoon shift

This afternoon the ITF unlocked at 15:55 UTC; the relock was a bit difficult because of a windstorm that passed over the site. The ITF relocked and Science mode started at 18:07 UTC; Science mode was interrupted for one minute at 19:24 UTC to close the SIB2 position loop.

Sub-system reports

DMS
The whole DMS system was offline for most of the shift because of a not-well-understood problem that filled the database and, as a consequence, froze the database machine itself.
Elian worked to solve the problem; the system recovered at around 19:20 UTC.

Oncall events

On-line Computing & Storage
(20-08-2025 14:30 - 20-08-2025 19:30) From remote
Status: Ended
Description: DMS database problem

Images attached to this report
Virgo Runs (O4c)
zaza - 14:57 Wednesday 20 August 2025 (67520) Print this report
Operator Report - Morning shift

05:00 UTC ITF in SCIENCE
11:13 UTC ITF in DOWN
11:15 UTC DMS server down, IT working on it
12:09 UTC ITF in SCIENCE - relock at first attempt
12:36 UTC DMS server successfully restored

 

Images attached to this report
AdV-COM (AdV commissioning (1st part) )
ruggi - 14:56 Wednesday 20 August 2025 (67526) Print this report
BS TM DAC NOISE projection

Yesterday a noise injection on one BS TM coil has been performed, directly at the DAC level. The transfer function from the injected voltage to Hrec is shown in fig 1. Using the model of DAC NOISE obtained from the measurement on NI (see the previous entry; a P5 board is used both for NI and BS), a projection on Hrec has been obtained (fig 2). The projection has been done using the TF shown here, multiplied by two, in order to consider the contribution of four coils (assuming them equal). 
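For clarity, the factor of two corresponds to the incoherent (quadrature) sum of four equal coil contributions, assuming a single-coil transfer function TF(f) and DAC voltage noise n_DAC(f):

\[ n_{\mathrm{proj}}(f) = \sqrt{\sum_{i=1}^{4} \big| \mathrm{TF}(f)\, n_{\mathrm{DAC}}(f) \big|^{2}} = \sqrt{4}\, \big|\mathrm{TF}(f)\big|\, n_{\mathrm{DAC}}(f) = 2\, \big|\mathrm{TF}(f)\big|\, n_{\mathrm{DAC}}(f) \]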

Images attached to this report
AdV-COM (AdV commissioning (1st part) )
ruggi - 14:13 Wednesday 20 August 2025 (67525) Print this report
NI TM DAC noise measurement

Yesterday the actuation of the NI TM was switched to High Power for a couple of minutes, while the ITF was in LN3. This was done for two coils:

UL: gps 1439660088, 150s

UR: gps 1439660858, 100s

Fig 1 shows the noise injected by the drivers in the two cases, compared to clean data. One can extract a measurement of the DAC voltage noise, given that the mechanical response of the actuator in m/V is known. Fig 2 shows the result, compared to a model of DAC noise; the agreement is not bad.
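In formula form (a sketch of the extraction described above; the symbols are ours): with n_h(f) the excess strain noise seen in Hrec during the injection, L ≈ 3 km the arm length and A(f) the actuator mechanical response in m/V, the DAC voltage noise follows as

\[ n_{\mathrm{DAC}}(f)\,[\mathrm{V}/\sqrt{\mathrm{Hz}}] \simeq \frac{n_h(f)\, L}{|A(f)|} \]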

Images attached to this report
AdV-PAY (NE and WE Payloads)
salvador - 13:40 Wednesday 20 August 2025 (67524) Print this report
Comment to Measurement of electric charge on WE (66317)

Here are the extracted values of the charge measurements concerning the latest injections.

The next table shows the values of the voltages as seen by the V1:Sc_WE_MIR_VOUT_XX (with XX: DL, DR, UL, UR) channels:

Mirror Coil Vinj (V)
WE DL 0.195
WE DR 0.194
WE UL 0.192
WE UR 0.191

 

The next 2 tables show the charge and voltage extracted from DARM and Hrec.

DARM:

Mirror Coil Qmir (pC) Vmir (V)
WE DL 743.8+-475.6 653.3+-417.7
WE DR 726.0+-271.6 637.7+-238.5
WE UL 966.7+-353.7 849.1+-310.7
WE UR 745.4+-434.6 654.7+-381.7

 

Hrec:

Mirror Coil Qmir (pC) Vmir (V)
WE DL 2397.4+-971.2 2105.8+-853.0
WE DR 2108.8+-678.2 1852.3+-595.7
WE UL 2747.6+-1107.0 2413.3+-972.3
WE UR 2437.9+-1304.8 2141.3+-1146.1

 

The values have changed significantly since last time. However, the line at 39.8 Hz used for normalisation was sometimes difficult to extract because it was too faint, which shows up particularly in the errors on the values.
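For reference, within each table the quoted Qmir and Vmir are related by a constant factor, consistent with a simple capacitor relation Vmir = Qmir/C; the implied effective capacitance (presumably the value assumed in the analysis, not measured here) is

\[ C = \frac{Q_{\mathrm{mir}}}{V_{\mathrm{mir}}} \approx \frac{743.8\,\mathrm{pC}}{653.3\,\mathrm{V}} \approx 1.14\,\mathrm{pF} \]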

AdV-DAQ (Data Acquisition and Global Control)
masserot - 12:08 Wednesday 20 August 2025 (67523) Print this report
Comment to DAQ Maintenance activities (67512)

Investigating the DET_ADC_Moni2 ADC7674 high temperature, one finds that this ADC7674 has been in this state since 2021-01-13; see this plot and its zoom.

It was not monitored, as far as one can find in the /virgoData/VirgoOnline/backup directory.

As it has been in this state for more than 4 years, its DMS flag has been shelved.

Images attached to this comment
AdV-COM (AdV commissioning (1st part) )
ruggi - 11:51 Wednesday 20 August 2025 (67522) Print this report
New control filter for DIFFp_TX

Since yesterday a new control filter has been running for DIFFp_TX. The aim is to improve the roll-off and reduce the in-band noise. The result is visible in the reduction of the correction (fig 1). The filter has a better behaviour at 1 Hz (fig 2), where an excess noise has sometimes been observed. A lower gain at low frequency should not be a problem, but we can improve that region if limited accuracy is observed in bad weather conditions.

Images attached to this report
AdV-PAY (NE and WE Payloads)
ruggi - 10:42 Wednesday 20 August 2025 (67521) Print this report
Comment to Measurement of electric charge on WE (66317)

A new injection of a line at 19.9 Hz (1 V amplitude) was performed yesterday on the WE TM coils in OPEN configuration.

DL 1439654808, 300s

UR 1439655408, 300s

UL 1439656768, 300s

DR 1439657988, 300s

 

Virgo Runs (O4c)
berni - 6:48 Wednesday 20 August 2025 (67519) Print this report
Operator Report - Night shift

ITF found in Science mode.

It unlocked at 2:31 UTC; it relocked at the first attempt and Science mode was set at 3:20 UTC.

 

Guard tours (time in UTC)

21:30-22:10; 23:30-0:10; 1:30-2:10; 3:10-3:50

Images attached to this report
Virgo Runs (O4c)
tomelleri - 22:52 Tuesday 19 August 2025 (67513) Print this report
Operator Report - Afternoon shift

ITF found locked in LOW_NOISE_3 with SCIENCE mode (Autoscience ON); BNS Range ~54 Mpc. ITF set to COMMISSIONING at 14:02UTC for planned WE F7/MAR resonances and electrostatic charge measurement by Ruggi. Back in SCIENCE (with Autoscience engaged) at 19:09UTC. ITF left locked.

Guard tour (UTC)
18:03 -> 18:51

Sub-system reports

DAQ
13:50-14:53UTC Latency flags on DMS turned gray due to missing data (see entry #67514 and VIM in fig.2)

ISC
10:37UTC Violin modes excited after post-maintenance relock, see DMS flag (fig.1)

Images attached to this report
Virgo Runs (O4c)
masserot - 19:18 Tuesday 19 August 2025 (67518) Print this report
Comment to Operator Report - Morning shift (67511)

Thanks to the recovery performed by the Computing Department:

  • FmRawLL is back on olserver53
  • FmRawBack restarted
  • FFLMoni53 and FFLMoni147 restored with their standard configuration
On-call intervention (General)
Oncall-system - 17:40 Tuesday 19 August 2025 (67517) Print this report
On-call intervention

The following report has been submitted to the On-call interface.

On-call events -> Electricity

Title: Mode Cleaner UPS failure

Author(s): dandrea

Called at: 00:10, 18-08-2025, by: Menzione
Remote intervention: Started: 09:20, 18-08-2025; Ended: 18:10, 18-08-2025
On-site intervention: Started: ; Ended:
Status: Action pending
Operator when issue resolved: Tomelleri

Details:

I received a call from the control room at 12:10 AM on August 18, 2025, informing me that both UPS systems of the mode cleaner had failed.
Since I was unable to intervene on site (vacation), the repair was postponed until the early hours of the morning, when, with the help of some colleagues, I was able to determine the nature of the failure.
We agreed to work on the UPS systems in the afternoon, as the current runtime did not allow for immediate intervention.
In the afternoon, via video link, I guided colleagues R. Romboli and Tomelleri through some operations on the UPS systems. One of them was shut down due to a fault, and the other was put back into operation normally.
Throughout the entire duration of the failure and the repair work there were no repercussions on the equipment powered by the UPS; everything proceeded transparently.

* Note that any files attached to this report are available in the On-call interface.
