Detector Operation (Operations Report)
zaza - 22:57 Thursday 09 April 2026 (68975)
Operator Report - Afternoon shift

The afternoon was spent recovering the ITF:
15:30 UTC end of SUS recovery of BS (Boschi)
16:00 UTC INJ recovery: RFC, beam centering (68976 De Rossi, Lagabbe, Spinicelli)
16:30 UTC ITF recovery (#68977 Bersanetti, Boldrini, Pinto)
22:00 UTC ITF left in UPGRADING, LOCKED_ARMS_IR

Images attached to this report
AdV-ISC (Commissioning up to first full interferometer lock)
boldrini, pinto, bersanetti - 22:50 Thursday 09 April 2026 (68977)
ITF recovery up to CARM_NULL_3F (unstable)

After the recovery of the RFC, of the BS and of the beam centering on the inputs, we started the recovery of the lock acquisition.

We found that the demodulation phases needed to be changed significantly to allow the lock. At the DRMI stage, the phase for MICH/SRCL was tuned through a noise injection on PRCL in the 1F stage and then adjusted by checking the coherence with the 3F counterparts before the hand-off.
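The coherence check between a 1F signal and its 3F counterpart can be sketched with SciPy. This is a toy illustration only: the line frequency, sampling rate, amplitudes, and variable names are made up and do not correspond to the actual Virgo channels or tooling.

```python
import numpy as np
from scipy.signal import coherence

fs = 10000.0                        # assumed sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Toy model: an injected noise line shared by the 1F signal and its
# 3F counterpart, each with independent broadband sensing noise.
line = np.sin(2 * np.pi * 91.0 * t)
sig_1f = line + 0.1 * rng.standard_normal(t.size)
sig_3f = 0.8 * line + 0.1 * rng.standard_normal(t.size)

# High coherence at the injection frequency indicates the 3F signal
# still senses the same degree of freedom before the hand-off.
f, coh = coherence(sig_1f, sig_3f, fs=fs, nperseg=4096)
idx = np.argmin(np.abs(f - 91.0))
print(f"coherence at {f[idx]:.1f} Hz: {coh[idx]:.2f}")
```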

  • DRMI_1F:
    • B2_56MHz: 1.43 -> -1.38
    • B2_6MHz: 1.1 -> 1.7
    • B2_QD2_6MHz_V/H: 2.8 -> 2.9
  • DRMI_3F:
    • B2_169MHz: 3.7 -> 0.0
    • B2_18MHz: 1.7 -> 1.1
    • B2_QD2_18MHz_V/H: 2.2 -> 3.14
  • DARM_IR:
    • B1p_56MHz: 4.7 -> -1.25

 

  • STEP1:
    • B2_169MHz: 3.5 -> -0.2
    • B1p_56MHz: 4.9 -> -1.25
  • STEP2:
    • B2_169MHz: 3.6 -> -0.1
    • B1p_56MHz: 4.8 -> -1.55
  • STEP3:
    • B2_169MHz: 3.9 -> -0.25
    • B1p_56MHz: 4.45 -> -1.95
    • B1p_56MHz_H/V: 3.3 -> 2.9/3.1

 

  • CARM_NULL_3F:
    • B4_6MHz: 1.55 -> -0.1
    • B1p_56MHz: 4.55 -> -1.85
    • B2_169MHz: 3.9 -> -0.25
    • B1p_56MHz_H/V: -> 2.4/2.4
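For reference, changing a demodulation phase amounts to rotating the I/Q quadratures of the demodulated signal. A minimal sketch (the helper function is ours, the example values are taken from the B2_169MHz retuning above):

```python
import numpy as np

def rotate_iq(i_sig, q_sig, dphi):
    """Rotate demodulated quadratures by dphi radians.

    Returns the new (in-phase, quadrature) pair; the total signal
    power is preserved, only its split between I and Q changes.
    """
    c, s = np.cos(dphi), np.sin(dphi)
    return c * i_sig + s * q_sig, -s * i_sig + c * q_sig

# Example: the B2_169MHz phase change, 3.9 rad -> -0.25 rad
dphi = -0.25 - 3.9
i_new, q_new = rotate_iq(np.array([1.0]), np.array([0.0]), dphi)
```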

While the lock acquisition up to CARM_NULL_3F is reliable, the ITF cannot maintain the lock for more than a couple of minutes in this state, due to an issue similar to what was observed about a month ago (68724): noise on DIFFp propagates to the longitudinal signals and kills the lock in a short time. Unfortunately, the attempts to tune parameters before the unlock were unsuccessful; the best way forward would likely be to disengage DIFFp while the state is stabilized. Notably, the phase for B2_169MHz is likely mistuned, as the coupling between MICH and PRCL is high, but we could not tune it in time before each unlock.

We leave the ITF with the arms locked.

Images attached to this report
AdV-INJ (Reference cavity (RFC))
derossi, lagabbe, spinicelli - 18:49 Thursday 09 April 2026 (68976)
Centering of the input beam on ITMs and RFC alignment

Triggered by the low power of RFC TRA, we started to move the piezos of M15 and M16 to try to realign it. We could optimize from 2.4 V to 2.8 V, but then we realized that yesterday at around 12 UTC the transmission had crossed a maximum above 3 V, in correspondence with the alignment of the north arm and a movement of the PR transversal position.

Moreover, since after the shutdown the beam was no longer centered on the ITMs, we moved PR X and Y to recenter it, with the arms locked (see attached plot).

After moving the PR, the RFC TRA power was low again; by undoing the steps done at the beginning on M15 and M16 we could recover 3.2 V.

Images attached to this report
Environmental Monitoring (Environmental Monitoring)
fiori, tringali - 15:43 Thursday 09 April 2026 (68971)
Comment to grounding injection test (68882)

On March 23 the same grounding current injection setup was used to inject sinusoidal lines. The aim was to observe whether any non-linearity is produced in Hrec, which the comb injection might have hidden.

The UTC times are:

  • 11 Hz line from 17:20:00  to 17:29:00
  • 21 Hz line from 17:30:30  to 17:35:30

Both injections produce a large effect in Hrec (Figure 1). A large peak is seen in Hrec at the same frequency as the injected line. In addition, a large magnetic line is produced by the injection and observed by the CEB magnetometers. This effect is somewhat similar to what was observed when the comb was injected (see mother elog).

Some non-linear effect is also seen in Hrec.

  • In the case of the 11 Hz line injection, a small peak (4 orders of magnitude smaller!) at 33 Hz (3rd harmonic) is also excited in Hrec (Figure 8). Some short excitation at 22 Hz is also visible in Hrec when the 11 Hz excitation starts (Figure 3).
  • In the case of the 21 Hz line injection, the noise seen in Hrec has a broad structure with sidebands at ±1.4 Hz and about ±3.3 Hz (Figure 10). A 63 Hz line (3rd harmonic, likewise 4 orders of magnitude smaller) is also excited (Figure 9).

Switching off the line at 17:35:30 caused the ITF to unlock (Figure 7).

One reason for a non-linear coupling might be that the injected noise produces an oscillating electrostatic force on the charged mirror. In the past this was done by directly injecting a common-mode voltage into the actuation coils (see https://logbook.virgo-gw.eu/virgo/?r=48306 and links therein) for the purpose of measuring the mirror charge.

Note that, in case the noise in Hrec is due solely to an electrostatic force on the mirror, we would expect a sinusoidal electrostatic force at f to cause the mirror to move at both f and 2f if the mirror is charged, and only at 2f if the mirror is not charged (as described on page 15 of https://www.mdpi.com/2075-4434/8/4/82). The result we observe here is not easily interpreted as this kind of effect.
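The f/2f signature can be checked numerically: a force proportional to (q + a·sin(2πft))² contains lines at both f and 2f when the charge q is non-zero, and only at DC and 2f when q = 0. A toy check with arbitrary amplitudes (not measured values):

```python
import numpy as np

fs, f0 = 1000.0, 21.0               # sample rate and drive frequency [Hz]
t = np.arange(0, 10, 1 / fs)
drive = np.sin(2 * np.pi * f0 * t)

def line_amp(force, freq):
    """Amplitude of the spectral line at `freq` via a discrete Fourier sum."""
    return 2 * np.abs(np.mean(force * np.exp(-2j * np.pi * freq * t)))

charged = (1.0 + drive) ** 2        # q != 0: cross term gives a line at f
neutral = drive ** 2                # q == 0: only DC and 2f survive

print(line_amp(charged, f0), line_amp(charged, 2 * f0))   # both non-zero
print(line_amp(neutral, f0), line_amp(neutral, 2 * f0))   # only 2f non-zero
```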

Images attached to this comment
Detector Operation (Operations Report)
gherardini - 15:23 Thursday 09 April 2026 (68973)
Operator Report - Morning shift
The main activity of the morning was the WE suspension electronics fix (https://logbook.virgo-gw.eu/virgo/?r=68969), completed at around 9:45 UTC. In the meantime, Alain worked on the problem of missing data (https://logbook.virgo-gw.eu/virgo/?r=68970) and the INJ team worked on the glitches (https://logbook.virgo-gw.eu/virgo/?r=68972). After lunch we started to relock, at least the two cavities, discovering a problem with the BS suspension electronics; Valerio and Marco are now working on this issue.
AdV-INJ (SIB1 and MC end mirror local controls (standardization of LC system for AdV))
ruggi, melo, spinicelli, pinto - 14:47 Thursday 09 April 2026 (68972)
Glitches on MC payload control

A periodic disturbance on several INJ signals has been observed after the restart of the system. After an investigation on several possible causes, a correlation with the DC value of MC MAR angular corrections has been found. The glitches disappeared after the zeroing of all the DCs by means of the motors.

Images attached to this report
AdV-DET (Commissioning)
gouaty, masserot, pacaud - 12:37 Thursday 09 April 2026 (68970)
Investigations on missing data reported by SDB2_LC and SQZ_CTRL after an update of the B1_PD{1,2}_DC offset

Yesterday the B1_PD{1,2}_DC offsets were adjusted by Romain:

  • 2026-04-08-19h18m30-UTC    info gouaty       SDB2_dbox_bench saved - gouaty: adjust B1_PD2 offset (rev. 269159)
    2026-04-08-19h19m45-UTC    info gouaty       SDB2_dbox_bench saved - gouaty: adjust B1_PD1 offset (rev. 269160)
    2026-04-08-19h19m48-UTC    info gouaty       'Reload config' sent to SDB2_dbox_bench

After this operation, the SDB2_LC and the SQZ_CTRL became blue in the VPM interface, complaining about possible missing samples.

This morning we made some investigations on the rtpc1 servers:

  • by changing the TOLM_PROCESSOR_LOOP delays from 25 us to 28 us and then restarting most of the rtpc1 servers except the DaqBox ones,
  • and then reconfiguring the SDB2_MezzPD0 and the SDB2_MezzPD1 service mezzanines,
  • but without any improvement.

We put back the standard TOLM_PROCESSOR_LOOP value (25 us) and continued with the investigations. We found that:

  • the SDB1_FAST_SHUTTER server task order can change. To fix its order, a dependency on the SQB1_Quadrants server has been added, and now this server is always executed after the SQB1_Quadrants one. Note that the SQB1_FAST_SERVER is the server driving the OMC DAC channels.
  • restarting any ACL server on the rtpc1 is enough to recover the correct running conditions for the 2 faulty servers.

In the end, as a simple workaround, a reloadConfig of the SDB_EDB_Tpro server allows recovering the correct running conditions without stopping any ACL servers.

 

AdV-SAT (Subsystem Management)
Boschi, Masserot, Piendibene - 12:05 Thursday 09 April 2026 (68969)
Comment to Suspensions Recovery after computing shutdown (68956)

Yesterday we recovered the NE tower and the top stage of WE. Unfortunately the Sc_WE crate was not able to transmit data to the DAQ. Investigating the issue, we identified the problem: data transmission stopped as soon as the p6 DSP connected to the mirror (serial #44226) was inserted in the crate. This morning the board FPGA was reprogrammed and tested, and the board has been re-inserted. The WE control system now seems to be back to standard operational conditions. The full suspension recovery should now be complete. We will have the final confirmation at the next lock.

Detector Operation (Operations Report)
berni - 21:46 Wednesday 08 April 2026 (68967)
Operator Report - Afternoon shift

ITF found in UPGRADING mode and DOWN state, with SAT recovery in progress:

  • Valerio was at WE working with Alain, trying to fix the missing data issue for the WE suspension;
  • Paolo was closing the loops for all other suspensions.

At around 15:00 UTC, Paolo restored all suspensions (except for WE). Following this, I recovered the SNEB vertical position and closed the loops for SNEB, SDB1, and SDB2.

We then proceeded with the ITF recovery (see 'Recovery part 1' for more details).

From 18:00 UTC to 19:30 UTC, Romain performed a check of the OMC in NI single bounce (see DET recovery for more details).

 

ITF left with the North arm locked.

 

Software

At 19:30 UTC I restarted the BacnetServer because it was providing flat data.

 

AdV-DET (Commissioning)
gouaty, masserot, lunghini, berni, bersanetti, casanueva - 21:44 Wednesday 08 April 2026 (68965)
DET recovery

2026/04/03:

After the DAQ recovery (https://logbook.virgo-gw.eu/virgo/?r=68964), all quadrants were recovered (shutters opened and Vbias enabled), and we re-enabled the photodiodes used during the lock acquisition.

2026/04/07:

We recovered the angular control and the position control of all the minitower detection benches (SDB2, SPRB, SIB2, SNEB, SWEB) as well as SDB1 (after the suspension team recovered its suspension).

2026/04/08:

DET_MAIN node recovered in SHUTTER CLOSED state after adjusting the B1s and B1_PD3 electronic offsets, and with a beam visible on the B1p camera.

Adjusted demodulation phases for the OMC lock error point (B1_PD3_f1 and B1_f1) with NI single bounce beam. For B1_PD3 the demodulation phase was changed from 1.5 to 3.4 rad (Fig.1). For B1 the demodulation phase was changed from -0.7 to +1.0 rad (Fig.2).

We also adjusted the offsets of the B1 PD1 and PD2 photodiodes.

Images attached to this report
AdV-COM (AdV commissioning (1st part) )
casanueva, derossi, ruggi, bersanetti, berni - 18:56 Wednesday 08 April 2026 (68968)
Recovery part 1

This afternoon there was some work on recovering the WE DSP. In the meantime we finished aligning the North arm and locked it without major problems.

During the first part of the afternoon the INJ was a bit unstable, most probably due to the works outside the MC building. Once it locked in a stable way, we tried an FMODerr, whose new working point was very far from the previous one.

Once this was done we re-tested the lock of the north arm and proceeded to align the west arm. Valerio informed us that the WE MIR board was not working, but that it shouldn't prevent the arm from locking (in principle we only use the mirrors of the end test masses in LN). However, after we realigned and tried to lock, we realized that the drift control was not working properly. Indeed, only the lines on the WI mirror were turned on; the ones on the BS and WE were not, and in this way the drift control would misalign the cavity. Paolo checked and there was some flag still passing through the WE mirror board, so we decided to stop for the day, since we need to fix the WE board problem anyway.

At this point we decided to test the NE ALS loops. They worked fine. We stopped here for the day.

Detector Operation (Operations Report)
lunghini - 15:48 Wednesday 08 April 2026 (68963)
Operator Report - Morning shift

ITF found in UPGRADING Mode and DOWN State.
All times are UTC.
08:24 PR vertical and longitudinal position restored (Operator and Ruggi). NI, WI, and BS TX and TY restored to the values of the lock of Mar 24, 2026 00:00:00;
08:39 SWEB_SBE and SNEB_SBE position loops opened manually (Gouaty);
09:00 Started the OMC lock requested by Was and Gouaty, setting a manual MISALIGNED state for WI with no control of the WE and NE suspensions, since the recovery of those suspensions had not started yet. The activity was concluded at 09:10 due to the loss of connection with all the suspensions, caused by the stop of the SatServer and SusDAQBridge VPM processes during the SAT activity;
09:15 SDB2_SBE and SPRB_SBE position loops opened manually (Gouaty);
12:05 The TCS_AuxCooling_WI_FLOW flag in the DMS started oscillating between red and yellow; the temperature (TCS_AuxCooling_Tank_TE) has been rising since 09:00 UTC but is below the threshold (see Figure 1). TCS experts were informed and a check of the system is ongoing (Zaza). These oscillations may be related to the air conditioning system; Soldani is going to the TCS room to investigate the issue;
12:40 Physical restart of the chiller of the Neovan in order to restart the VPM process LaserGuardianINJ (Spinicelli).
ITF left in UPGRADING Mode and DOWN State. Recovery of SAT is still ongoing.
 

Images attached to this report
IT infrastructure (Computing)
salconi - 12:00 Wednesday 08 April 2026 (68966)
Activities performed during the computing shutdown
The list of main tasks performed during the shutdown is as follows:

- FreeIPA update (Kraja)
- Xenhosts update from citrix to xcp-ng (DiBiase)
- MBTA xcp-ng update from 8.2 to 8.3 (DiBiase)
- ST storage nodes firmware update (Salconi)
- ST storage nodes OS update, from RedHat 7.9 to 8.10 (Salconi - Kraja)
- Dell SC9000 firmware update (-)
- Firewall software update (Cortese)
- Mailing lists server software update (Cortese)
- Main file server upgrade tests (Kraja - Cortese)
- Inventory and labelling of computer room electric lines (Kraja - Cortese)
AdV-DAQ (Data Acquisition and Global Control)
gouaty, letendre, masserot, mours, pacaud, viret - 11:28 Wednesday 08 April 2026 (68964)
DAQ restart after the computing shutdown

 2026-04-02

The DAQ recovery, after the computing shutdown, started on 2026-04-02-18h28m45-UTC, once the CmNames server and the VPM server were restarted and all the expected hosts (rtpc, olserver and stol) were alive.

We first restarted all the storage and data collection parts:

  • raw stream on the stol01 and stol02 hosts
  • rawfull, rds and trend streams on the stol03 host
  • the data collection parts running on
  • operations performed between 2026-04-02-18h28m48-UTC and 2026-04-02-18h38m44-UTC

As everything seemed to run correctly, we continued by restarting all the servers involved:

  • in the data access part 
  • in the Detector Monitoring System 
  • in the Automation part 
    • The PyHVAC server was unable to restart due to
      • the missing /dev/shm/zFbsIng input SHM
      • and later the missing input channels V1:HVAC_INJ_TE_OUT_SET
      • the server was fully recovered at 2026-04-07-08h27m37-UTC
  • in the Image readout 
  • operations performed  between 2026-04-02-18h39m15-UTC and 2026-04-02-19h08m44-UTC

We left the whole DAQ running for 1 hour to check that everything ran smoothly, thanks to the availability of the DMS and VIM web pages.

 Then we continued by

We left everything running during the night to perform a more accurate check the next day.


 2026-04-03

We found that a lot of Ethernet devices (lnfs, sqz, power supplies) were no longer reachable.

For the DAQ/DET Ethernet devices:

  • the DaqBoxes were reachable
  • the SDB power supplies were not reachable
  • the SDB internal switches were reachable
  • the SWEB QD RD482-ETH bridge was not reachable, meaning the B8_QD{1,2} shutters and Vbias could no longer be remotely driven and monitored

Thanks to the local support, remote monitoring/driving of the SDB power supplies was recovered.

To recover remote monitoring/driving of B8_QD{1,2}:

  • the SWEB_PowerUnit12 was turned off/on, and fortunately this was enough to recover the device
  • as the internal Timing mezzanine was switched OFF/ON too, the SWEB bench DBoxes were reconfigured
  • to recover the correct running conditions, the SWEB rack DBoxes were reconfigured too and the Acl SWEB_rtpc servers restarted
  • operations performed between 2026-04-03-09h19m33-UTC and 2026-04-03-09h37m46-UTC

Easter break

During the Easter break, the olserver115 became unreachable around 2026-04-03-19h34m38-UTC. The MdVim server was temporarily restarted on the olserver118 at 2026-04-04-04h20m30-UTC.

The olserver115 host has been recovered by the IT department, and the MdVim server has been running on its host since 2026-04-07-06h32m25-UTC.


2026-04-07

All the Fbs servers were restarted with a new release v8r24p0 

  • operations performed between 2026-04-07-15h03m20-UTC and 2026-04-07-15h08m29-UTC
Injection system (General activities)
derossi, lagabbe - 17:37 Tuesday 07 April 2026 (68961)
INJ restart after computing shutdown

This morning we restarted the injection by:

- closing the EIB suspensions 

- restoring the previous pump currents in the Neovan diodes (5 A and 4.9 A respectively)

- restoring the previous currents to the SL diodes (29.1 A each)

- locking PMC, closing BPC and manually aligning the IMC to reach the threshold for the lock and the automatic alignment to be engaged 

There is slightly less power than before the shutdown, both in transmission of the Neovan and of the PMC (4% of power missing).

From the camera, the output shape and position of the Neovan beam seemed ok (picture 1), so we went to the laser lab to align the PMC. With a very small tilt in the vertical direction we minimized the 01 peak; however, the target power was not recovered (0.75 V vs 0.78 V).

The RFC has less power in transmission but we will wait for the ITF to be aligned because it has an influence on it.

 

Images attached to this report
AdV-TCS (TCS studies actuation (simulations + experiments))
nardecchia - 17:30 Tuesday 07 April 2026 (68962)
Comment to TCS restart (68957)

This afternoon, a few hours after the restart of all CO2 lasers, thermal camera images of both CO2 benches were acquired in order to check the shape and position of the DAS rings/CH for both WI and NI.

In Figures 1 and 2, today’s images are compared with those acquired on 2026-03-10.

No changes in the shape or position of the actuators are observed compared to previous acquisitions.

Images attached to this comment
AdV-SAT (Subsystem Management)
Boschi, Piendibene - 17:28 Tuesday 07 April 2026 (68956)
Suspensions Recovery after computing shutdown

CEB (BPC, IB, BS, NI, WI, PR, SR, SDB1) and MC SAT control systems have been recovered. BPC, IB and MC have been restored on the afternoon of last Friday to allow INJ subsystem recovery. The rest of CEB tower control systems have been restored today. Tomorrow we will proceed with the recovery of NE and WE SAT control systems.


Detector Operation (Operations Report)
gherardini - 17:21 Tuesday 07 April 2026 (68959)
Operator Report - Daily shift
The morning was dedicated to the weekly maintenance, which started at 7:00 UTC. Here is a list of the activities reported to the control room:

- standard vacuum refill from 6:00UTC to 10:00UTC (VAC Team);
- cleaning of central and west end buildings (Menzione with external firm: from 6:00UTC to 10:00UTC);
- INJ: system restart;
- TCS: system restart (#68957);
- SUSP: system restart (#68956);

Activity stopped at 14:50 UTC; the suspension recovery will go on tomorrow morning.
Environmental Monitoring (Environmental Monitoring)
Tringali - 16:24 Tuesday 07 April 2026 (68960)
LargeCoil processes restart

This morning, an attempt was made to restart the LargeCoil processes in all three buildings via VPM, but it failed.
The issue was fixed by disconnecting and reconnecting the Ethernet cable of the coil, restoring proper communication.

AdV-TCS (TCS studies actuation (simulations + experiments))
nardecchia - 14:17 Tuesday 07 April 2026 (68958)
Comment to TCS restart (68957)

Regarding the PR CHRoCC, I realized that I made an error: 2.796 V corresponds to the reference value for Vout, not to the VDAC.
The correct reference VDAC is 3.88 V.
Therefore, at 12:10 UTC, I set the VDAC to 3.88 V with a ramp time of 3200 s.

AdV-TCS (TCS studies actuation (simulations + experiments))
gherardini, menzione, nardecchia - 12:23 Tuesday 07 April 2026 (68957)
TCS restart

Here is the list of all actions performed to restart the TCS.

  • RHs:

Restarting the process alone was not sufficient.
As usual, it was necessary to physically power-cycle the power supplies. Therefore, Fabio went to the Central Building, then to NE, and finally to WE to perform this operation. After this intervention, it was possible to restart all the processes. After that, all RHs were switched on again:

- SR RH: V=23.13 V at 08.54 UTC
- NE RH: V=16.6 V at 08.56 UTC
- WE RH: V=17.4 V at 08.59 UTC

  • PR CHRoCC (the PR CHRoCC process was already restarted):

-The voltage was re-applied to the CHRoCC (V = 2.796 V, ramp time = 7200 s) at 08:35 UTC.

  • CO2 LASERS:

The restart sequence was performed in the following order:

  1. Main laser chillers
  2. WI CH
  3. NI CO2 main laser
  4. NI CH
  5. NI CO2 main laser

Nicola then re-enabled the Guardian system.
All operations were carried out between 09:00 and 09:30 UTC.

At around 10:08 UTC, Fabio returned to the TCS room because the DMS was showing the CO2 laser interlock-related channels as grey.
He disconnected and reconnected the network from the setup, after which the data were restored.

All TCS actuators have been switched on again.
The situation will be monitored over the next hours.

Images attached to this report
Environmental Monitoring (Environmental Monitoring)
Tringali, Lunghini - 18:28 Friday 03 April 2026 (68955)
Magnetic spot measurements at future environmental station site

This morning we performed spot magnetic measurements in the area where the new environmental monitoring stations will be installed. Excavation works in this area are scheduled to start after the Easter break.

Measurements were carried out by placing the magnetic probe approximately at the center of the area and then repositioning it to different locations at a distance of about 8 m from the center within the marked perimeter, Figure 1. The magnetic field was measured using a Bartington Mag-03MC100 sensor (range ±100 μT, calibration factor 100 mV/μT), connected to a portable data logger. The probe was consistently oriented with its z-axis (ch-3) pointing towards North during the campaign. For each position, spectral measurements were acquired and stored.

During the measurements performed at WEB and CEB, we observed the presence of a noise source, switching on and off in a non-regular manner.

This behaviour was identified through the magnetic spectra: when the source was not active, only the X-axis (ch1-x) showed a significant reduction of the noise floor, reaching values of a few pT/√Hz for frequencies above ~90–200 Hz. The Y and Z axes did not show any noticeable variation and remained unchanged regardless of the source activity, Figure 2.

The fact that this feature was not observed in the North position could be related to the source being continuously active during that specific measurement. 

It is worth mentioning that agricultural activities were ongoing in the vicinity of the site during all the measurements.

In addition, distinct spectral features are observed around ~23 Hz and ~32 Hz, consistently present in all the acquired measurements, Figure 3.

Below, we report the spectral amplitudes of the main power-related lines at 50 Hz and its harmonics (150 Hz and 250 Hz). The values have been converted into magnetic field units using the sensor calibration factor, corresponding to 1 μV = 0.01 nT. 
For some positions, measurements were taken both when the magnetic noise source was active and when it was inactive, since its disappearance was noted during data acquisition, thus allowing both conditions to be recorded. In addition, dedicated measurements were performed by placing the probe in proximity to the external magnetometers.
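The voltage-to-field conversion used for the tables below (100 mV/μT, i.e. 1 μV = 0.01 nT) can be made explicit with a small helper. The function name and the example value are ours; the calibration factor is the one quoted for the Bartington sensor.

```python
def volts_to_nanotesla(v_asd):
    """Convert a measured ASD in V/sqrt(Hz) to nT/sqrt(Hz).

    Bartington Mag-03MC100 calibration: 100 mV/uT,
    i.e. 1 V = 10 uT = 10000 nT.
    """
    return v_asd * 1.0e4

# Example: a 50 Hz line read as 97.1 uV/sqrt(Hz) on the X channel
# corresponds to 0.971 nT/sqrt(Hz) (the NEB Center X value below).
b_asd = volts_to_nanotesla(97.1e-6)
```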

***************** NEB ************************************

Position Axis 50 Hz [nT/√Hz] 150 Hz [nT/√Hz]
Center X 0.971 0.046
  Y 0.271 0.043
  Z 0.557 0.046
       
North (+8m) X 1.17 0.045
  Y 0.384 0.040
  Z 0.644 0.040
       
South (+8m) X 1.35 0.055
  Y 0.395 0.041
  Z 0.715 0.058

 

********************* WEB**************************

Position Axis 50 Hz [nT/√Hz] 150 Hz [nT/√Hz] 250 Hz [nT/√Hz]
Center X 0.651 0.039 0.074
  Y 1.89 0.044 0.136
  Z 0.381 0.036 0.030
         
North (+8m) X 0.586 0.040 0.051
  Y 1.91 0.051 0.077
  Z 0.330 0.038 0.028
         
West (+8m), noise source OFF X 0.581 0.017 0.063
  Y 1.63 0.053 0.104
  Z 0.364 0.047 0.032
         
West (+8m), noise source ON X 0.407 0.065 0.061
  Y 0.988 0.067 0.088
  Z 0.250 0.055 0.033

 

************************ CEB **************************************

Position Axis 50 Hz [nT/√Hz] 150 Hz [nT/√Hz] 250 Hz [nT/√Hz]
Center, noise source OFF X 0.452 0.031 0.026
  Y 1.98 0.046 0.063
  Z 1.46 0.062 0.051
         
Center, noise source ON X 0.517 0.060 0.044
  Y 1.96 0.045 0.060
  Z 1.44 0.061 0.059
         
South (8m) X 0.446 0.063 0.036
  Y 2.14 0.052 0.071
  Z 1.45 0.080 0.073
         
East (8m) X 0.328 0.052 0.029
  Y 1.61 0.056 0.052
  Z 1.32 0.059 0.059
         
West (8m) X 0.763 0.054 0.056
  Y 2.68 0.055 0.077
  Z 1.50 0.070 0.056
         
North (8m), noise source OFF X 0.487 0.020 0.014
  Y 1.55 0.050 0.043
  Z 1.29 0.073 0.048

***************External magnetometers*****************

Position Axis 50 Hz [nT/√Hz] 150 Hz [nT/√Hz] 250 Hz [nT/√Hz]
North X 0.556 0.099 0.054
  Y 1.80 0.050 0.057
  Z 1.61 0.186 0.124
         
West X 0.497 0.058 0.048
  Y 1.62 0.045 0.050
  Z 1.56 0.052 0.055

 

Images attached to this report
AdV-EGO (Electrical System)
dandrea - 10:07 Friday 03 April 2026 (68954)
New UPS for computer rooms
As scheduled for March 30 - April 2, we have installed the new UPS that powers the computing.
The new 315 kW machine replaces the two 200 kVA parallel machines, which had remained in service for twenty years.
The new UPS is modular, in the sense that it is composed of five 67 kW UPS modules, which can be hot-swapped in the event of a failure of any module.
Compared to the past, during this work we decided to move the line that powers the computer room conditioning from the IPS network to the UPS network, so as to have the entire data and cooling infrastructure powered by the UPS.
While installing the new machine, we were also able to check the condition of the existing battery packs; unfortunately we had to exclude one of the two branches, as it did not guarantee the required operating performance.
We have left in operation only the battery branch considered efficient, which can guarantee a battery life of 20 minutes at 170 kW.
Given that the computing load, including the air conditioning system, is around 100 kW, we could currently count on around 50-60 minutes of battery life with the UPS running on batteries.
We are evaluating the replacement of the two battery packs, in order to bring the battery life at 200 kW to 60 minutes.
The new machine has a much higher energy efficiency than the previous ones; this means a significant energy and therefore economic saving, estimated at around 20 k€/year, which will allow us to recover the investment cost of the new machine in about two years.
Online Computing (Virgo Platforms and Services)
carbognani - 18:08 Thursday 02 April 2026 (68953)
Computing shutdown: main core services restored, restoration of detector computing and storage services ongoing

In line with the planning detailed here:

https://indico.igwn.org/event/7/

Today we proceeded with the 2nd computing core services shutdown, starting at 12:00, and with the UPS electrical shutdown in the Control Building, lasting one hour starting around 13:00.

The new UPS serving the Computing Room has been put in operation, and the computing core services have been progressively restored during the afternoon and made available around 18:00.

To proceed quickly with some tests of the authentication chain, the Cm and VPM hosting machines (olserver111 and olserver113) have been restarted, and the Cm servers for Cascina, CascinaLV, CascinaTest, CascinaLVTest (and their corresponding VPM servers) have been restarted without problems.

The rest of the detector computing and storage services are being restarted, with the goal of being ready early tomorrow morning to resume the DAQ chain, possibly up to making the Detector Monitoring System operative.

 

 

AdV-TCS (Point Absorbers Mitigation)
lorenzini, lumaca, gherardini, rossi - 16:58 Tuesday 31 March 2026 (68952)
Transport of the WI PA actuator on site

On Monday, March 30, we transported the WI PA actuator from Rome Tor Vergata to the site. Thanks to Fabrizio, we unloaded the actuator at the entrance of the Central Building, leaving it there until the following day before moving it to the TCS room.
During the maintenance day on Tuesday, March 31, we mounted the lifting frame used to sling the actuator to the overhead crane (Fig. 1). Fabio then helped us operate the crane and move the actuator toward the TCS room (see Fig. 2).
We placed the actuator on a trolley and moved it inside the TCS room. There, we installed the heater source and its paraboloid inside the actuator structure (Fig. 3), then left it in the TCS room properly covered with plastic film (Fig. 4).

Images attached to this report