[Jeff, Daniel]
We disassembled the 8" to 6" reducing tee from a ~20" long tube (with 6" CF Flange ends) and wrapped the open ends in UHV foil. We also took off two 6" CF flange gate valves from two of these tubes and wrapped the open ends in foil. We put everything except the reducing tee away in the white cabinet with the other Holometer equipment. The reducing tee is on the table by the central vessel.
[Jeff, Torrey]
We performed an initial lock of the second output filter cavity, and we eliminated the 890Hz noise issue by putting the LNA10 Oscilloscope Preamplifier as an input buffer to the seeder DC modulation input.
The quality of the lock with the laser is approximately as good as in B102. There is a 115kHz resonance with a high gain controller.
We performed a lock with no amplifier by manually scanning the laser until we saw flashes, then activating the piezo loop. This loop was limited by a 2.6kHz resonance with a high gain controller.
The EVAL-ADHV4702-1 provides a gain of 21V/V, allowing us to scan on the piezo and lock. The 15V power supplies from Amazon are very bad and result in unacceptable 120Hz noise in the amplifier output. We switched to using a Newport 15V supply and this problem was eliminated. The lock with the amplifier is of a comparable quality to the laser lock, but it is limited by a 25kHz resonance. We will take data soon to better characterize the loop.
Both cymac front-end machines have been installed in the server cabinet in the control room (B111B east) with an ethernet connection to the wall and have openssh-server installed. They have been added to the local DNS, so you can ssh into them using
ssh cymac@turing.lab
ssh cymac@babbage.lab
I have also added the mini PC in the control room, "brewster", to the DNS, but I cannot manage to ssh into it.
[Ian, Torrey]
While we were working in the lab, the ND filter attached to the power meter would not slide into place. Upon inspection, we noticed that the sensor was slightly askew. We took it apart and found that the glue holding the sensor to the plastic back had come apart (they used a minuscule amount for some reason). We glued the sensor back to the plastic casing and cleaned the ND filter. We put things back together and everything is working as intended.
Additionally, we went to look at the second power meter Alex uses, and the ND slider was not working on that one either. We will repair this one as well but have not gotten to it yet.
Took apart and re-glued the one Alex uses. It is drying under some aluminum foil in B111B for the time being. Will reassemble Monday.
[Jeff, Ian]
continuing from [11998]
After following this procedure to configure the BIOS settings, the computer did not boot. This was because during step #1, resetting all the settings, the boot order was modified so that 'Boot Option #1' was set to [Hard Disk: (Bus 01 Dev 00) PCI RAID Adapter]. After changing 'Boot Option #1' to [UEFI Hard Disk:debian], the computer boots as expected.
[Torrey, Ian, Daniel]
We placed the PRC and Demonstrator IFO Vacuum Cubes on the optics table and secured them to the table with 1" long, 1/4-20 screws on each corner. To do this, we placed the cubes in the Northwest corner of the optics table and pushed them into place. For the PRC, we used some used clean room wipes to reduce the friction. The IFO was close enough that we just pushed it. The aluminum is able to slide on the breadboard somewhat easily.
Torrey and I then removed the remaining 5" long, 8" diameter bellows and a 10" to 8" zero length reducer flange from the IFO vacuum cube so that the OFC 1 sled didn't have it in the way. We covered the exposed end of the 10" to 8" zero length reducer flange and the cube in foil.
It's now time to start replacing flanges. I added a tab on this google sheet to track the 10" flanges. It seems that we have almost everything we need to make a functional Laser Filter Cavity (LFC) and demonstrator IFO vacuum chambers. I think we need 2 more CF Flange 2.75" 1550 nm AR Coated Windows for the IFO.
I updated some of the plots in the buzz code for the ASC Solver folder when making a system as described in the previous log post. The new plots are color coordinated so that it is easy to see which lines should overlap. The short story of both plots is that all lines of the same color should overlap: when each part is added to the larger system, it should still have the same frequency response.
The first plot (SPOFF_diagnostic_bode.pdf) shows the large system plotted against the original parts that were used to make it. It offers a quick look at all of the parts to make sure the decomposition of the parts is working correctly in the makeSPOFF() function.
The second plot (FOM_bode_from_SPOFF.pdf) does a very similar thing but with the FOMs: it plots the individual state space systems of the FOMs, then puts them into the large SPOFF and plots them again from there. Ideally these should line up, which they do. The third, green line on those plots shows the FOMs that have been pulled back out of the SPOFF after the SPOFF has been balanced using the ssutil.balance_sys_gain() function. This is to show how the gain balancing affects the overall numerical limitations of the system. You can see in the BNS FOM that in the balanced system the BNS FOM cannot go as low in magnitude; it hits a numerical wall earlier than the unbalanced version of the BNS FOM.
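For anyone who wants to reproduce the basic "same-color lines should overlap" check outside the buzz code, here is a minimal sketch using scipy. It uses a toy two-state system, and an arbitrary diagonal state rescaling stands in for what ssutil.balance_sys_gain() does (the real balancing is more involved); the point is that a similarity transform changes the internal state scaling but not the input/output frequency response, so the two curves should lie on top of each other.

import numpy as np
from scipy import signal

# Toy 2-state resonator standing in for one FOM inside the larger SPOFF system.
w0, gamma = 2 * np.pi * 100, 2 * np.pi * 5
A = np.array([[0.0, 1.0], [-w0**2, -gamma]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
sys_raw = signal.StateSpace(A, B, C, D)

# A diagonal similarity transform (A -> T^-1 A T, B -> T^-1 B, C -> C T) rescales
# the states without changing the input/output response.
T = np.diag([1e3, 1e-2])
Tinv = np.linalg.inv(T)
sys_scaled = signal.StateSpace(Tinv @ A @ T, Tinv @ B, C @ T, D)

# Bode magnitudes of both realizations; these should overlap to numerical precision.
w = 2 * np.pi * np.logspace(0, 4, 400)
_, mag_raw, _ = signal.bode(sys_raw, w)
_, mag_scaled, _ = signal.bode(sys_scaled, w)
print("max |difference| in dB:", np.max(np.abs(mag_raw - mag_scaled)))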
I tried to run the HiB solver with the pre-balanced version (i.e. balanced before it was given to the solver), but it showed similar results with one exception: the range calculated from the closed-loop PSD seemed to give less wild values, as you can see in the third figure (rms_HiB_range_from_PSD.pdf). The values don't make sense, but at least they don't seem random.
We have received our Thorlabs cart but encountered a few hiccups that Daniel and I will address:
Notes from working in lab:
-With the amplifier not amplifying (just nominal ~300 mW output state) the REFL 1550 PD for OFC1 needs a lower ND filter on it. I've swapped it. Note that this should be swapped back BEFORE turning the amplifier on. I've put a note on the amplifier as a reminder.
-The error signal size is quite small. Had an idea from something Lee said a while ago. When we were setting up the new space someone saw the tank circuit box hanging off the EOM and suggested adding a small cable to it. Increasing the length between the inductor and the EOM significantly decreased the error signal amplitude. I've removed the cable. (pics attached)
-In the move I mixed up which tank circuit is for which EOM. They have been identified and labeled. Both tank circuits are on their respective EOMs.
-The error amplitude (with +24 dB input boost on the laser lock box) is now ~100 mV, and the RMS of the error signal when the cavity is laser locked is 10 mV.
-Went to start 775 OFC1 alignment but the AOM for this path is not deflecting a beam now. I don't know the cause.
Focusing on other things until AOMs are diagnosed. Afternoon work:
-PDs are powered for OFC2
-BNCs are plugged into patch panel and Moku 2.
-Looking for flashing in OFC2, can change which moku scans the laser freq by swapping patch panel locations. Much easier than before.
-775 path EOM has been optimized to give max error amplitude.
-OFC2 has been recovered and is fully functional (both paths).
Plan is to fix the AOM for OFC1, align 775 for OFC1, and then start thinking about OFC3. In B102 my Moku systems were a little wonky; with the patch panel and recabling from scratch, each filter cavity has its own Moku now. OFC1 = Moku 1 / OFC2 = Moku 2.
[Torrey, Ian]
General progress notes:
-The three photo diodes required for OFC1 activities are powered.
-All cables to and from the moku/patch panel are set up for OFC1 activities, including PDs, EOM, and DC modulation for the laser.
-1550 path is realigned and can be locked with the laser.
Problem:
The ~900 Hz signal that arises from having a cable plugged into the DC modulation port of the laser needs to be solved. This is a known problem from when we were in B102, but it seems much worse over here. Here is what I know.
1) Light is blocked. DC modulation port plugged in. ~890 Hz signal observed on PD. This is not optical and must be electrical.
2) Light blocked. DC modulation port is unplugged from the patch panel. Oscillations are gone. To clarify, I am unplugging it here: PXL_20241202_193738276~2.jpg
3) This interferes with the error signal enough to not give a good laser lock.
4) The oscillations can be seen when the channel is DC coupled and light is hitting the PD.
5) Both the laser and the photo detector are plugged into a blue outlet, but not the same blue outlet.
If anyone has any insights on this, let me know. I am curious to see whether plugging the PD into a white outlet makes a difference. Lee has suggested a type of circuit to fix this potential grounding issue in the past, although I am blanking on the name of it.
After some frustrating moku glitches and then some realignment, the second output filter cavity now has 775 light. Pending both 775 paths, and 1550 in OFC2, we should be ready for filter cavity science again.
[Sander, Daniel]
We routed a USB Extension Cable from the vacuum oven in B110 through B111D to the B111B control room. The cable trays above the mobile clean rooms in B111B were very hard to access. Maybe we should get some sort of cheap grabber tool from Amazon.
The cables make it to the computers in B111B but just barely. A ~6 ft USB extension cable would ensure there is plenty of slack in the system.
The computer recognized that something was plugged in, so it seems the extension cables work. Next week, I will finish setting up the vacuum oven software that I downloaded. I previously got this software to work on my laptop.
The computer is able to communicate with the vacuum oven after very minimal software setup.
I added another 50" USB extension so that there isn't tension on the cables. This is ~45" too long, but the extra cable is not in the way and the oven is able to communicate with the computer and vice versa.
I have checked all visible lab ethernet ports and compiled the following CSV file, which shows that all ports are now on the same subnet. All IPv4 addresses pulled by my laptop had 192 as the first octet.
Thus we should be able to use dns on all instruments and connected devices now.
There may need to be changes in the active DNS naming on Pihole to correct any instruments that were previously on a switch that was not on the correct subnet.
Please take a look at the attached CSV; you can access my code for grabbing IPv4 addresses and storing them in a CSV here.
(To run the code, open a terminal, navigate to the file, enter "python3 ipv4_grabberV2.py", and follow the instructions. You may need to install Pandas if it is not already installed.)
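For reference, the core idea of the script is just "record the IPv4 address my laptop gets on each wall port into a CSV". A stripped-down illustrative version (not the actual ipv4_grabberV2.py, and using the standard library instead of Pandas; the prompt text and filename here are placeholders) might look like:

import csv
import socket

def local_ipv4():
    # Connecting a UDP socket to a public address sends no packets; it just lets
    # the OS pick the outgoing interface, whose address getsockname() reports.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]

if __name__ == "__main__":
    port_label = input("Wall port label: ")
    addr = local_ipv4()
    with open("lab_port_ipv4.csv", "a", newline="") as f:
        csv.writer(f).writerow([port_label, addr])
    print(port_label, addr)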
In my sporadic work-from-home I'm fixing up some of the computing infrastructure.
We now have incremental backups for the files in the Nextcloud. If you lose something particularly important, we can recover it. I haven't made a way for users to do this themselves yet (namely, read-only access), but I might at some point.
It will not back up files larger than 1 GB - beware of that detail.
Also, if you have long-lived services that you want to monitor (like backups), you can use the https://healthchecks.mccullerlab.com/ service. It is watching all of the backups of the web services and Nextcloud. It runs the same software as healthchecks.io; it just watches for pings to an HTTP address on a schedule.
This will be used to back up and monitor the various lab machines as well. That is nearly set up.
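For anyone wiring a job into it, a minimal sketch of a ping from Python follows; the ping URL/UUID below is a placeholder, so use whatever the healthchecks dashboard shows for your check.

import urllib.request

# Placeholder ping URL; substitute the UUID shown for your check on the dashboard.
PING_URL = "https://healthchecks.mccullerlab.com/ping/your-check-uuid"

def run_backup():
    # ... the actual long-lived job (backup, sync, etc.) goes here ...
    pass

if __name__ == "__main__":
    run_backup()
    # A plain HTTP GET to the check URL marks the job as having run on schedule.
    urllib.request.urlopen(PING_URL, timeout=10)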
In order to connect the Power Recycling Vacuum Cube to the Central Vessel, the heights of the center flanges must be close (more tolerance with the use of bellows). I measured the distance between the bottom of the 10" side length vacuum cube and the custom aluminum base to be between 0.97" and 0.99", with an average of 0.976" (nominal thickness is 0.97"). I measured the custom aluminum base to be between 0.5" and 0.52" (nominal thickness of 0.50"). We will likely need a 0.01"-0.02" sheet to bring up the cube bottom to 1.5" above the optics table. We have shim stock to do this.
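As a quick consistency check on that estimate, taking the averages of the measurements above: 0.976" + 0.51" ≈ 1.486", and 1.5" - 1.486" ≈ 0.014", which lands inside the 0.01"-0.02" shim range quoted.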
[Ian, Jeff, Sander, Daniel]
Ian and I removed the 10" to 4.5" zero length reducer from the vacuum cube that was the "elevated" Holometer bend cube so that we could loosen a screw, which would let us later get off the "elevation part" that elevated the cube. We then installed a new copper gasket and tightened the zero length reducer "normally" to 34 Nm with 1.75" long screws (I've started testing before removing the old base to make sure these screws go in all the way).
Ian, Sander, and I then lifted the cube and loosened the last screw holding the elevation part to the base flange and removed this part. We then set the cube down and tilted it over.
Jeff and I replaced the base flange with the custom flange I made and custom aluminum base to secure the cube to the table. I tightened all the 1.75" long screws to 34 Nm.
This cube can be flipped over and placed on the table.
I am following this DCC document to configure the BIOS settings of Turing, one of our front-end machines.
sudo dmidecode -s baseboard-product-name
X11SRL-F
sudo dmidecode -s processor-version
Intel (R) Xeon(R) W-2245 CPU @ 3.90GHz
The BIOS version is 2.8.
First, I reset all BIOS settings to default using 'Restore Defaults' in the 'Save & Exit' menu.
Advanced > CPU Configuration: Hyper-Threading [ALL] - Disabled
Advanced > CPU Configuration > Advanced Power Management Configuration > CPU P State Control: SpeedStep (Pstates) - Disabled
Advanced > CPU Configuration > Advanced Power Management Configuration > CPU C State Control: Enhanced Halt State (C1E) - Disabled
Advanced > CPU Configuration > Advanced Power Management Configuration > Package C State Control: Package C State - C0/C1 state
Advanced > PCIe/PCI/PnP Configuration: Above 4G Decoding - Enabled
IPMI > BMC Network Configuration: Update IPMI LAN Configuration - Yes
IPMI > BMC Network Configuration: Configuration Address Source - Static
IPMI > BMC Network Configuration: IPv6 Support - Disabled
I will need to follow up about the Station IP address. What is the local CDS admin LAN? Will we be doing this? For now I have set the static IP address to be the same address that was DHCP assigned.
NOTE: Make sure that debian is selected as Boot Option #1 in the boot settings; this may be changed by resetting all settings.