Detector Characterisation (Broadband noise)
mwas - 15:44 Sunday 18 August 2019 (46696)
Comment to Missing 4 Mpc (46691)

I can confirm that the PRCL noise is mostly gone; however, it is still higher than before Aug 7. There is still some contribution, especially below 50 Hz, but the impact on the range is small. With offline h(t) cleaning I manage to improve the range by only 0.7 Mpc.

Images attached to this comment
Virgo Runs (O3)
Nenci - 15:00 Sunday 18 August 2019 (46695)
Operator Report

During the morning the ITF was stably locked with BNS range ~43 Mpc. It unlocked once at the end of the shift (12:30 UTC - picture attached); relock is in progress at 13:00 UTC.

VPM - 12:55 UTC: INJ_MAIN crashed/restarted.

Guard tours (times in UTC)

  • from 5:20 to 5:54;
  • from 7:06 to 7:38;
  • from 9:01 to 9:31;
  • from 11:10 to 11:50.
Images attached to this report
Virgo Runs (O3)
Gherardini - 6:55 Sunday 18 August 2019 (46694)
Operator Report - Night shift
This night the ITF unlocked from science at 22:02 UTC because of a failure of the EIB SAS control, which started to oscillate (see plots); the on-site intervention of the ISYS expert (Antonino) was needed: to restore the EIB SAS control it is necessary to block the laser beam on the bench. At ~23:00 UTC I started to relock the ITF; before each lock attempt I stopped/restarted all the metatron nodes (except the suspension ones):

two failures with ITF_LOCK, which went on moving DARM to B1s even though OMC1 was not locked and OMC_LOCK was in TEMP_CONTROLLED state (in both cases the WI local controls were opened by the guardians)

ITF relocked at the third attempt; science mode started at 00:23 UTC.

-guard tours(UTC)
0:20 --> 0:55
2:35 --> 3:05
4:05 --> 4:35
 
  


SUSP

At 23:27 UTC and 23:46 UTC the WI local control was opened by the guardian following the ITF unlock; it was properly closed.



ISYS

(17-08-2019 22:00 - 17-08-2019 23:15) On site

Status: Ended

Description: Oscillation of EIB SAS controls.

Actions undertaken: Block the laser beam and open/close the EIB controls.



Images attached to this report
Virgo Runs (O3)
Montanari - 23:19 Saturday 17 August 2019 (46693)
Operator Report - Afternoon shift

16:53 UTC ITF unlocked for no obvious reason (see plot)

After the unlock, there were a lot of problems with the automation.
I had to restart the metatron nodes, reload INJ_MAIN, and manually misalign the PR several times.

After several attempts, I was able to relock at 19:14 UTC
19:15UTC Science mode

Guard Tours
13:12UTC - 13:40UTC
15:10UTC - 15:37UTC
17:01UTC - 17:29UTC
19:10UTC - 19:36UTC

DAQ
TFMoni process restarted after a crash at 13:11 UTC and 14:34 UTC

17:04UTC, 17:40UTC, 18:29UTC
INJ_MAIN, DET_MAIN, SQZ_MAIN, NARM_LOCK, WARM_LOCK, CARM_DARM_LOCK, MICH_LOCK, PRCL_LOCK, OMC_LOCK, ITF_LOCK, CALI stopped and restarted

17:34UTC ITF_LOCK restarted after a crash

Images attached to this report
AdV-TCS (CO2 laser projector)
nardecchia - 21:48 Saturday 17 August 2019 (46692)
NI CO2 DAS powers jump
Looking at today's data, a jump in the NI CO2 DAS powers is visible at around 12:45 UTC, probably due to a small laser temperature variation.
This change has an effect on B1p, which becomes darker, and on the CMRF, which increases.
Some effects are also visible in Hrec_OgNE and Hrec_OgPR.
The sidebands do not show any effect (see attached figure).

Images attached to this report
Detector Characterisation (Broadband noise)
ruggi - 16:15 Saturday 17 August 2019 (46691)
Missing 4 Mpc

When the range dropped from 48 to 42 Mpc, one identified culprit was the worsened working point. Today, as already pointed out by Matteo, the working point moved back towards the good condition, and some improvement of the sensitivity has been observed. A plot showing the integrated range is added here (fig 2): almost 2 Mpc have been gained between 60 and 150 Hz. I think this is mainly due to the PRCL noise, which is gone, but Michal can evaluate this more precisely than me.

4 Mpc are still missing. The working point might not yet be perfect; perhaps a check of the alignment setpoint could provide the usual improvement of the high-frequency CMRF and of the DET scattered-light coupling, but an improvement of 4 Mpc is not expected.

Looking at the sensitivity (fig 3), a problem at the level of the high-frequency floor is visible. A check of the SQZ performance might be useful. If something is wrong there, maybe some of the structures (40 Hz, 55 Hz, 120 Hz, 160 Hz) could also be the scattered light already seen when SQZ was misaligned?
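As a side note on how integrated-range plots like fig 2 can be read, below is a minimal sketch (not the official range monitor) of how a cumulative BNS range can be computed from a sensitivity ASD, to see which frequency band contributes the gained or missing Mpc. The ASD file name is a placeholder; the 2.26 factor is the usual sky/orientation averaging of the inspiral-range convention.

```python
import numpy as np

# Minimal sketch (not the official range pipeline) of a cumulative BNS range
# computed from a one-sided strain ASD. Constants in SI units.
G, c = 6.674e-11, 2.998e8
Mpc, Msun = 3.086e22, 1.989e30

def cumulative_bns_range(freq, asd, m1=1.4 * Msun, m2=1.4 * Msun, snr_thr=8.0):
    """Cumulative angle-averaged BNS range [Mpc] as a function of frequency."""
    mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2              # chirp mass
    # Newtonian inspiral |h(f)|^2 at a reference distance of 1 m
    h2 = (5 / 24) * np.pi ** (-4 / 3) * (G * mc / c**3) ** (5 / 3) * c**2 * freq ** (-7 / 3)
    snr2 = np.cumsum(4 * h2[:-1] / asd[:-1] ** 2 * np.diff(freq))  # accumulated SNR^2
    horizon_m = np.sqrt(snr2) / snr_thr                    # optimal-orientation horizon
    return freq[1:], horizon_m / 2.26 / Mpc                # 2.26: sky/orientation average

# usage sketch, with a hypothetical two-column ASD file (freq [Hz], ASD [1/sqrt(Hz)]):
# f, r = cumulative_bns_range(*np.loadtxt("hrec_asd.txt", unpack=True))
```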

Images attached to this report
Comments to this report:
mwas - 15:44 Sunday 18 August 2019 (46696)

I can confirm that the PRCL noise is mostly gone; however, it is still higher than before Aug 7. There is still some contribution, especially below 50 Hz, but the impact on the range is small. With offline h(t) cleaning I manage to improve the range by only 0.7 Mpc.

Images attached to this comment
Virgo Runs (O3)
Sposito - 14:58 Saturday 17 August 2019 (46687)
Operator Report - Morning shift

Today I found the ITF unlocked. It was impossible to relock it: there were lots of grey flags on the DMS and it seemed that a lot of data were missing. So I called the DAQ on-call L. Rolland (5:15 UTC) for a check on the data; from his point of view there were no problems with it, but maybe something related to Tango. I contacted the Computing & Storage on-call A. Bozzi (6:08 UTC) for the Tango issue and she suggested calling a dedicated expert, so I called F. Carbognani (6:15 UTC), who restarted the whole Tango SAT machinery on olserver120: all the application servers, the Starter, and the database interface DataBaseds. I was still not able to lock, so Franco suggested calling the SAT expert. At 7:45 UTC I called the SAT expert V. Boschi, who again performed some checks but did not find the solution, so he suggested contacting an ISC expert. At 7:40 UTC I called the ISC on-call J. Casanueva and she told me to restart all the Metatron nodes (SAT nodes excluded) and retry the lock. This procedure worked and I was able to relock. ITF back in SCIENCE mode at 8:18 UTC with ~45 Mpc.

Other
Vpm
07:42 UTC INJ_MAIN, DET_MAIN, SQZ_MAIN, NARM_LOCK, WARM_LOCK, CARM_DARM_LOCK, MICH_LOCK, PRCL_LOCK, OMC_LOCK, ITF_LOCK, CALI stopped and restarted
Guard tours (times in UTC)
from 05:29 to 06:00;
from 07:09 to 07:40 ;
from 09:29 to 09:58;
from 11:08 to 11:37;

SUSP
5:05 UTC I closed the WI guardians (MAR_GRD_TX, TY, and TZ).

DAQ
(17-08-2019 05:15 - 17-08-2019 06:00) From remote
Status: Ended
Description: Missing data investigation

ISC
(17-08-2019 07:45 - 17-08-2019 08:00) From remote
Status: On-going
Description: J. Casanueva suggested a procedure in order to be able to relock the ITF

Other
(17-08-2019 06:15 - 17-08-2019 07:15) From remote
Status: On-going
Description: F. Carbognani restarted the whole Tango SAT machinery on olserver120: all the application servers, the Starter, and the database interface DataBaseds.

SUSP
(17-08-2019 07:15 - 17-08-2019 07:45) From remote
Status: On-going
Description: V.Boschi performed some checks on the SAT electronics boards

Detector Characterisation (Broadband noise)
ruggi - 13:51 Saturday 17 August 2019 (46690)
Comment to Drop of sensitivity (46638)

At about 7:30 local time the IMC transmitted power increased a bit.

Images attached to this comment
Detector Characterisation (Broadband noise)
tacca - 13:29 Saturday 17 August 2019 (46689)
Comment to Drop of sensitivity (46638)
In the lock following tonight's automation troubles and the restart of all the VPM metatron processes, the ITF working point probably changed. The optical gain increased, the PR coupling decreased, and the BNS range increased by a couple of Mpc. To be investigated further.
Images attached to this comment
AdV-DAQ (Data collection)
Boschi - 11:41 Saturday 17 August 2019 (46688)
Comment to DAQ on-call (46685)
Looking at the LockMoni VPM, it is clear that the problem is more general. I suggest checking the process load on the olserver120 machine. A quick check on the suspension side does not show any evident problem.
AdV-DAQ (Data collection)
fcarbogn - 10:39 Saturday 17 August 2019 (46686)
Comment to DAQ on-call (46685)
There are indeed problems in collecting data from the suspensions.
This can be clearly seen, without involving the Virgo DAQ or the DMS, by simply issuing: checkSms SatServer
It can be seen that the number of available channels is continuously fluctuating, indicating that those are not collected from the lowest-level servers.
Hoping it could help, I have restarted the whole Tango SAT machinery on olserver120: all the application servers, the Starter and the database interface DataBaseds.
This didn't seem to change the situation much, so I suggested that the operator ask the support of the SAT on-call to diagnose the problem at a lower level.

As mentioned by Loic, those problems manifest themselves not only as missing channels, but also as failed communications with the suspension nodes. For example, it has happened a few times during the week that the PR suspension did not receive the misalign command during the automation sequence and needed to be misaligned manually before restarting from DOWN.
This is obviously heavily affecting the automation in various ways, with the need to often restart nodes from scratch.
AdV-DAQ (Data collection)
rolland - 9:03 Saturday 17 August 2019 (46685)
DAQ on-call

I have been called since there were gray flags in the DMS "missing channels", mainly from the suspension lines (*_Electr in particular).

The data collection itself seems to work fine. I noticed that FbsAlp is complaining much more than usual about too-late replies from some devices (probably VAC servers, asked to send data via the Cm/Tango interface). FbmAlp is complaining a lot this morning about late frames received from different Metatron nodes.

Large spikes in FbmAlp latency appeared on August 12th, 7h22 UTC, see figure 1 (before), figure 2 (August 12th) and figure 3 (today).

Figure 4 shows some trend channels around that time: channels from the real-time systems (CAL_NE_MIR_Z_NOISE generated in an RTPC and PCAL_power from an ADC) and channels generated in the data collection pipeline (FbmAlp_latency) are always present, while channels coming from Metatron nodes start to be missing sometimes (too late, and rejected by the data collection pipeline at the level of FbmAlp).

I noticed that some Metatron nodes have been complaining rather often about "Tango Errors" for a few days (for example ITF_LOCK: it has complained often since August 13th, while it had happened only during one day, July 31st, since the beginning of the run).

 

Images attached to this report
Comments to this report:
fcarbogn - 10:39 Saturday 17 August 2019 (46686)
There are indeed problems in collecting data from the suspensions.
This can be clearly seen, without involving the Virgo DAQ or the DMS, by simply issuing: checkSms SatServer
It can be seen that the number of available channels is continuously fluctuating, indicating that those are not collected from the lowest-level servers.
Hoping it could help, I have restarted the whole Tango SAT machinery on olserver120: all the application servers, the Starter and the database interface DataBaseds.
This didn't seem to change the situation much, so I suggested that the operator ask the support of the SAT on-call to diagnose the problem at a lower level.

As mentioned by Loic, those problems manifest themselves not only as missing channels, but also as failed communications with the suspension nodes. For example, it has happened a few times during the week that the PR suspension did not receive the misalign command during the automation sequence and needed to be misaligned manually before restarting from DOWN.
This is obviously heavily affecting the automation in various ways, with the need to often restart nodes from scratch.
Boschi - 11:41 Saturday 17 August 2019 (46688)
Looking at the LockMoni VPM, it is clear that the problem is more general. I suggest checking the process load on the olserver120 machine. A quick check on the suspension side does not show any evident problem.
Virgo Runs (O3)
Gherardini - 7:03 Saturday 17 August 2019 (46684)
Operator Report - Night shift
This night the ITF unlocked from science at 3:19 UTC because of a NI DSP failure (see plot); the relock was difficult due to a strange behaviour of the automation:

PR not misaligned at the unlock, INJ_MAIN node in error, NARM_LOCK crashed, ITF_LOCK going on with the lock even though the OMC was not locked (2 times; at the second unlock the WI local controls were opened by the guardians), many "Tango errors" reported in the metatron nodes, and a general slowing down that increases the latency at the FbmAlp level; relock in progress...

-guard tours(UTC)
21:45 --> 22:15
0:25 --> 0:55
2:35 --> 3:05
4:00 --> 6:30

DAQ

TFMoni crashed/restarted at 21:06UTC, 1:03UTC, 1:45UTC.


SUSP

At ~4:49 UTC the WI local control was opened by the guardian following an ITF unlock; it was properly closed.


Images attached to this report
Virgo Runs (O3)
Montanari - 23:00 Friday 16 August 2019 (46683)
Operator Report - Morning shift

ITF in Science for the whole shift

Guard Tours
17:32UTC - 18:02UTC
19:27UTC - 19:53UTC

Images attached to this report
AdV-PSL (Pre-mode-cleaner block)
Cleva - 16:17 Friday 16 August 2019 (46682)
PMC thermal effect follow-up (triggered by the intra-cavity contamination)

Follow-up of the thermal effect in the PMC.

Thermal effects in the PMC are likely the signature of intra-cavity contamination and as such can be used to monitor the intra-cavity loss over time.

A slow scan of the laser frequency vs the PMC is used to reveal the thermal effect (as explained in 44233). The scan procedure details are listed in 45128. Such a scan was completed during the last maintenance (13/08/19).

Plots 1 and 2 give the superposition of the PMC error signal and transmission for the scans made in Jan, Feb, May and August 2019 for the "shrink" scenario (the thermal effect contributes to speeding up the PZT scan, see 44233).

Plots 3 and 4 give the superposition of the PMC error signal and transmission for the scans made in Jan, Feb, May and August 2019 for the "widen" scenario (the thermal effect contributes to slowing down the PZT scan).

Rem.: the error signals are possibly offset in the "y" direction; this is mentioned in the legend when relevant.

For each plot the time axis has been rescaled so that each PDH signal fits the duration between the two lateral bands (14 MHz), which are taken as a time reference.
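For illustration, a minimal sketch of this rescaling, assuming the crossing times of the two 14 MHz sidebands have already been identified for each scan (the values below are placeholders):

```python
import numpy as np

def normalise_scan_time(t, t_sb_left, t_sb_right):
    """Rescale a PZT-scan time axis so that the two 14 MHz sideband resonances
    fall at 0 and 1; scans taken at different speeds then overlay directly."""
    return (t - t_sb_left) / (t_sb_right - t_sb_left)

# placeholder crossing times (seconds into the scan), to be measured per scan:
# t_norm = normalise_scan_time(t, 0.012, 0.058)
```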

# plot 5 gives a zoom of the Err signal in the widen config

-> we see some evolution of the PDH shape, mainly visible in the widen configuration: the PDH shrinks more and more, which would mean that the thermal effect is getting reduced.

This is quite an unexpected behaviour.

The trend is a 10% reduction with respect to the enlargement noticed on 27/02/2019 (the enlargement was associated with a few percent losses in the PMC throughput).

Rem.: some simulations are still missing to quantitatively assess the relation between the widening and the intra-cavity absorption.

Images attached to this report
Virgo Runs (O3)
Sposito - 14:58 Friday 16 August 2019 (46680)
Operator Report - Morning shift

ITF in Science mode for the whole shift with ~43Mpc.

Virgo Runs (O3)
cleva - 12:46 Friday 16 August 2019 (46679)
ML glitch triggering EOM glitch

# plot 1&2: we look for an EOM glitch and select the one at 5h25mn57s UTC, a thin peak visible in EOM_CORR_raw_FS_min

# plot 3: the selected glitch

- from the calibration on ML_FREQ_Q_MONIT (46666) we deduce a frequency glitch visible in the master laser of ~20 kHz (*)

# plot 4: the associated frequency excursion of the glitch is the derivative of the EOM_CORR signal. Assuming EOM_CORR features a Gaussian shape (exp(-(t/tau)^2)), we simulate the frequency content of the glitch.

- we get a peak-to-peak frequency excursion of the EOM_CORR glitch of ~20 kHz, consistent with what is observed in ML_FREQ_Q_MONIT

# remark: the counterpart fed back by the IMC loop onto the ML_PZT is 0.5 mV ~ 6 kHz and does not explain what is seen by ML_FREQ_Q_MONIT.
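A minimal sketch of the plot-4 simulation described above, assuming a Gaussian EOM_CORR pulse; the amplitude, width and volt-to-frequency factor are placeholders, not the calibrated values:

```python
import numpy as np

# Sketch: model EOM_CORR as amp * exp(-(t/tau)^2) and differentiate it; the
# frequency excursion is proportional to d(EOM_CORR)/dt. k_v2hz converts the
# slope (V/s) into Hz and is a placeholder, not the real EOM calibration.
def eom_glitch_freq(t, amp_v=1.0, tau_s=70e-6, k_v2hz=1.0):
    corr = amp_v * np.exp(-(t / tau_s) ** 2)
    freq = k_v2hz * np.gradient(corr, t)      # numerical time derivative
    return corr, freq

t = np.linspace(-5e-4, 5e-4, 4001)
corr, freq = eom_glitch_freq(t)
print("peak-to-peak frequency excursion [Hz]:", freq.max() - freq.min())
```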

Conclusion

This glitch originated within the ML. A larger one would have produced a fast EOM glitch or a fast unlock as in 46449. Indeed, the latter features an EOM_CORR slope of ~1 V / 30 us, while the fastest (descending) slope of the current one is ~1 V / 70 us. The speeds are thus comparable; what is missing is the amplitude in order to rank it into the fast-unlock family.

 

(*) peak-to-peak glitch ~1 mV (removing the constant slope). ML_FREQ_Q_MONIT is at phi ~27° (see 46666); the associated slope of the MLFM is 0.1 V*sin(27°) in V/rd,

that is 0.045 V/rd, that is 0.090 V/(2pi) rd, that is 0.09 V / 2 MHz
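As a quick cross-check of the footnote arithmetic, a sketch using only the numbers quoted above:

```python
# With the quoted slope of ~0.09 V per 2 MHz on ML_FREQ_Q_MONIT, a ~1 mV
# peak-to-peak glitch corresponds to roughly the ~20 kHz quoted above.
glitch_v = 1e-3                    # ~1 mV peak-to-peak (constant slope removed)
slope_v_per_hz = 0.09 / 2e6        # ~0.09 V over ~2 MHz
print(glitch_v / slope_v_per_hz)   # ~2.2e4 Hz, i.e. ~20 kHz
```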

 

Images attached to this report
Virgo Runs (O3)
berni - 6:58 Friday 16 August 2019 (46678)
Operator Report - Night shift

ITF in Science mode for all the shift.

Guard tours (times in UTC)

  • from 22:50 to 23:20;
  • from 0:30 to 1:00;
  • from 2:30 to 3:00;
  • from 4:05 to 4:35;
  • from 4:57 ...

DAQ
TFMoni restarted at 2:25 UTC after a crash.

Virgo Runs (O3)
Gheradini - 23:00 Thursday 15 August 2019 (46677)
Operator Report - Afternoon shift
The ITF kept the science mode all the shift.

-guard tours (UTC)

13:05 --> 13:30
15:05 --> 15:30
17:10 --> 17:35
19:15 --> 19:40
Detector Characterisation (Broadband noise)
mwas - 15:41 Thursday 15 August 2019 (46676)
Comment to Drop of sensitivity (46638)

Indeed the contribution from PRCL has increased, and it seems to be back at the level it was at the beginning of the run.

Figure 1 shows an offline noise subtraction where channels are considered at all frequencies; the inverse of the matrix of transfer functions between all the noise subtraction channels is used, in order to use correlated channels simultaneously (which the online Hrec subtraction is not able to do at present). The contribution of PRCL (green) becomes larger than the contribution of MICH at 90 Hz, instead of at 150 Hz as it was 2 weeks ago.

Figure 2 shows the ratio of h(t) cleaned online and offline in red; the offline cleaning improves the range by about 2 Mpc.

This can be compared to figures 3 and 4, which are the same as figures 1 and 2 but for Aug 5, when the range was at ~48 Mpc. So better noise subtraction could bring us from ~42 Mpc to ~44 Mpc, but there are still 3-4 Mpc missing.

Figure 5 compares the offline cleaning between Aug 11 (red) and Aug 5 (black). There is clearly higher shot noise now, which should be due to poorer optical gain, but this does not explain the remaining difference in the 80 Hz - 200 Hz bucket.

In any case, switching the cross-over between the SPRB_B4_56Mhz_Q (MICH) and SIB2_B2_8MHz_I (PRCL) subtraction in h(t) from 150 Hz back to 90 Hz, where it was since the beginning of the run, should improve the range by ~1 Mpc.

/users/mwas/calib/noiseSubtraction_20190811/noiseSubtraction.m
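For illustration, a minimal sketch (in Python, not the MATLAB script at the path above) of the multi-witness subtraction described in the first paragraph: the witness-witness cross-spectral matrix is inverted at each frequency, so that correlated channels such as the PRCL and MICH probes are removed simultaneously rather than one after the other.

```python
import numpy as np

# Sketch of a frequency-domain multi-witness subtraction: estimate cross-spectral
# densities over many segments, solve for all coupling coefficients at once, and
# subtract all witnesses simultaneously.
def multi_witness_clean(h_segs, w_segs):
    """h_segs: (n_seg, n_freq) FFTs of the target h(t) segments.
    w_segs: (n_wit, n_seg, n_freq) FFTs of the witness segments.
    Returns the cleaned target FFTs, segment by segment."""
    n_wit, n_seg, n_freq = w_segs.shape
    h_clean = np.empty_like(h_segs)
    for k in range(n_freq):
        W = w_segs[:, :, k]                          # (n_wit, n_seg)
        h = h_segs[:, k]                             # (n_seg,)
        csd_ww = W @ W.conj().T / n_seg              # witness-witness CSD matrix
        csd_wh = W @ h.conj() / n_seg                # witness-target CSD vector
        coef = np.linalg.solve(csd_ww, csd_wh)       # simultaneous coupling coefficients
        h_clean[:, k] = h - coef.conj() @ W          # subtract the predicted contribution
    return h_clean
```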

Images attached to this comment
Virgo Runs (O3)
Montanari - 15:00 Thursday 15 August 2019 (46675)
Operator Report - Morning shift

ITF in Science for the whole shift

Guard Tours
05:17UTC - 05:51UTC
07:08UTC - 07:38UTC
08:57UTC - 09:27UTC
10:44UTC - 11:14UTC

 

DAQ
05:25UTC, (VPM) TFMoni process restarted after a crash

Images attached to this report
Virgo Runs (O3)
berni - 6:56 Thursday 15 August 2019 (46673)
Operator Report

ITF in Science mode for all the shift.

Guard tours (times in UTC)

  • from 22:14 to 22:44;
  • from 0:30 to 1:00;
  • from 2:30 to 3:00;
  • from 4:00 to 4:40;

Air Conditioning
To be reported: a strange behaviour of the INJ area ACS, similar to the one reported yesterday, from 2:30 UTC; see attached plot. Experts have also been informed by mail.

EnvMoni
blinking red flag on DMS for NE HALL - ENV_NE_ACC_Z; Irene has been informed by mail.

Images attached to this report
Virgo Runs (O3)
Gheardini - 23:00 Wednesday 14 August 2019 (46669)
Operator Report - Afternoon shift

The shift started with the ITF in science mode; the ITF unlocked at 13:32 UTC because of a BPC failure (1st plot), then we had some problems with the relock: unlocks at SSFS, unlocks at LOW_NOISE_1 and unlocks while locking the OMC1. Finally, at 15:32 UTC the ITF relocked in the LOW_NOISE_3_SQZ state without doing anything special; the last action was stopping/restarting the ITF_LOCK and OMC_LOCK metatron nodes. At 15:40 UTC the weekly calibration started:

 

-calibration measurement (UTC):

15:40 --> 15:54 longitudinal noise injection

15:59 --> 17:31 calibration with the photon calibrators, calibration of the marionettes and validation of h(t) (CALIBRATED_DF_PCAL): during the first attempt the ITF unlocked at 13:57UTC.

17:40 --> 18:05 free swinging Michelson (CALIBRATED_FREEMICH_WINI)

18:40 --> 18:54 measurements for NE,WE mirrors calibration (CALIBRATED_DF_INtoEND_LN2)
18:55 --> 18:57 time delay measurements (CALIBRATED_DELAY)
-WE PCAL timing: GPS 1249844209
-NE PCAL timing: GPS 1249844293
18:59 -->  PR-WI cavity (CALIBRATED_PRWI), failed because it was not possible to lock the PR-WI in a stable way

 

ITF relocked and science mode started at 19:58UTC.

 

NOTE: also during the calibration and at the last relock I experienced some other strange failures at LOW_NOISE_1 and with ITF_LOCK, which went on with the lock, moving DARM to B1s even though the OMC was not locked and the OMC_LOCK node was in TEMP_CONTROLLED state; in this case I tried to pass through the INIT state with ITF_LOCK.

Air Conditioning
This afternoon at around 14:00 UTC one of the heater temperatures jumped below its threshold (2nd plot); Riccardo went to the north and technical buildings to check the status of the boiler without finding any malfunction; situation under monitoring.

DAQ
TFMoni crashed&restarted at 15:42UTC

DMS
periodic check of the shelved flags, report attached.

SUSP
At ~14:32 UTC the WE local control was opened by the guardian following an ITF unlock; it was properly closed.

ISC
(14-08-2019 14:00 - 14-08-2019 15:30) On site
Status: Ended
Description: strange problem with the ITF relock.

Images attached to this report
Non-image files attached to this report
Virgo Runs (O3)
cleva, Kéfélian - 21:52 Wednesday 14 August 2019 (46672)
Glitch at ML level - another one

At 19h25mn19s UTC: same kind of event inside the master laser as described in 46665.

Images attached to this report
Detector Characterisation (Broadband noise)
ruggi - 19:50 Wednesday 14 August 2019 (46671)
Comment to Drop of sensitivity (46638)

The excess of noise in Hrec at 100-150 Hz follows the trend of the PR to h coupling (fig 1). This was not the case with the old version of Hrec (and this is expected, because PRCL is no longer subtracted in that frequency range). This can also be seen by comparing hoft to a roughly calibrated DARM (fig 2), and by computing the transfer function and coherence from PRCL to hoft and from PRCL to DARM respectively (fig 3).

Images attached to this comment