[Briana, Ian]
7/18: Started taking data with a 150 mm lens in, but I messed up the wavelength in JamMt and the alignment, so that data is void. From an initial look at it, it seemed like putting in the lens removed the finer dips within the larger dips (some of the absorption features are getting lost), but I wouldn't trust this too much because the alignment was bad. When I removed the lens, there was a drop in transmittance (blue circle), but by shifting the photodetector around you can get back to a level close to the original with-lens measurement, although there is some noise (red circle) (see rip_alignment). This leads me to think the issue in the last post could also be an alignment problem. I'm not sure, though, since I don't think I touched anything that would move it out of alignment, but who knows.
7/19: Realigned and measured with the 150 mm lens. First, I scanned across temperature with the lens in place. Then I removed the lens and scanned. Then I put the lens back in its original place, beam profiled around the vapor cell, and moved the photodetector to see if I could recover the same power level. I measured the beam profile at the active area of the photodetector, right before the beam enters the vapor cell, and right after the vapor cell. The problem was the order in which I did this: it required repeatedly putting the lens back where it was, which might not have been accurate. A more consistent procedure is described below. At this point I also put in a 0.2 OD ND filter (transmits 10^-0.2, about 63% of the power) so as not to saturate the blue NanoScan beam profiler (I didn't find a setting to increase the saturation limit in the program).
7/21-7/22: Took data with the 100 mm lens (at 25 Celsius on 7/21, at 40 Celsius on 7/22). This is the procedure going forwards (can be repeated for the different focal lengths):
First, profile the beam. The laser power should not affect the beam profile, but for consistency set the laser current to 110 mA and the temperature to 6.755 kOhms. Set the position of the lens and vapor cell (everything is referenced to the mirror before the beam enters the vapor cell). Then:
1. Record the positions where the vapor cell will begin/end and where the lens will be.
2. Beam profile at the beginning of the vapor cell without the lens in place and record the location.
3. Put the vapor cell in place. Beam profile after the vapor cell without the lens and record the location.
4. Determine the location of the photodetector where the waist should be. Ensure that the beam profile you measure at the photodetector is within the 3.6 mm by 3.6 mm aperture and record the location/beam profile. Put the photodetector down and mark the location.
5. Slightly tune the positioning of the photodetector until you can see good dips (this was largely unsuccessful; I'm not sure how the tuning works).
6. Do the scans/measurements (NO LENS).
7. Put the lens in place. Confirm that the light falling onto the photodetector is the same as without the lens by checking alignment.
8. Do scans (WITH LENS).
9. Put the beam profiler where the photodetector is (keeping the lens in place), which will require removing the photodetector. Then beam profile after the vapor cell.
10. Remove the vapor cell and beam profile right before the lens.
Using Torrey's beam profiling code and JamMt, you should be able to determine the beam size reasonably accurately. I was running into an issue where the beam size I measured differed from JamMt by >3 mm because I should have picked 'thick lens' in the JamMt settings (not 'thin lens'). With this fixed, there is still a ~1 mm difference between the measured and expected beam size, which could be explained by error in the distance measurements. Moving forward, I'll verify the distance measurements more carefully to see if this is the issue. Because I was having trouble with JamMt, I had started profiling the beam sequentially (element by element), so I think the beam size values are correct, they just are not yet verifiable against JamMt.
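As a sanity check on the JamMt numbers, here is a minimal Gaussian-beam ABCD propagation sketch in Python. It uses a thin-lens model (so it will not reproduce the thick-lens correction mentioned above), and the wavelength, input waist, and distances are placeholder values to be replaced with the measured ones:

```python
import numpy as np

# Minimal Gaussian-beam ABCD propagation sketch (thin-lens model).
# All numbers below are placeholders; swap in the measured wavelength,
# input waist, and distances from the optical table.

wavelength = 780e-9      # m (assumed laser wavelength)
w0_in = 1.0e-3           # m, input waist radius (assumed)
d_lens = 0.30            # m, distance from input waist to lens (assumed)
f = 0.150                # m, lens focal length (150 mm lens)
d_after = np.linspace(0, 0.5, 6)  # m, distances after the lens to evaluate

def propagate(q, M):
    """Apply an ABCD matrix M to the complex beam parameter q."""
    A, B, C, D = M.ravel()
    return (A * q + B) / (C * q + D)

def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def spot_size(q, wavelength):
    """1/e^2 beam radius from the complex beam parameter."""
    return np.sqrt(-wavelength / (np.pi * np.imag(1.0 / q)))

# q at the input waist: q = i * zR
zR = np.pi * w0_in**2 / wavelength
q = 1j * zR

# Propagate to the lens and through it.
q = propagate(q, free_space(d_lens))
q = propagate(q, thin_lens(f))

for d in d_after:
    w = spot_size(propagate(q, free_space(d)), wavelength)
    print(f"{d*100:5.1f} cm after lens: w = {w*1e3:.3f} mm")
```

The thick-lens case can be handled the same way by composing the lens's full ABCD matrix (from its radii, thickness, and index) in place of thin_lens(f).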
Results:
The dip depth comparisons from all the data are shown here with the measured beam sizes (DipDepthComparison.png). Even at a different vapor cell temperature, the smaller beam size decreases the strength of the absorption dip (the 150 mm lens produces a beam size smaller by ~1 mm). My tentative explanation is that a smaller beam means the photons interact with fewer atoms, so less absorption occurs (in contrast, a larger beam means photons are more likely to be absorbed because they interact with more atoms). A smaller beam at the same power also means higher intensity, so I feel like you would be saturating the atoms by bombarding them with photons, but I'm not sure whether there are too many atoms for this to occur (is the spontaneous emission process too fast?). Something I'll look into/calculate.
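As a starting point for that calculation, here is a rough sketch (Python) comparing the beam's peak intensity to the two-level saturation intensity I_sat = pi*h*c*Gamma / (3*lambda^3). The linewidth, wavelength, power, and beam radius are placeholder (roughly Rb D2-like) values and should be swapped for the actual numbers:

```python
import numpy as np

# Rough saturation check: compare the beam's peak intensity to the
# two-level saturation intensity I_sat = pi*h*c*Gamma / (3*lambda^3).
# All inputs are placeholder values to be replaced with the real ones.

h = 6.62607015e-34   # J*s
c = 2.99792458e8     # m/s

wavelength = 780e-9          # m (assumed transition wavelength)
gamma = 2 * np.pi * 6.07e6   # rad/s, natural linewidth (assumed, Rb D2-like)
power = 1e-3                 # W, optical power at the cell (placeholder)
w = 0.5e-3                   # m, 1/e^2 beam radius in the cell (placeholder)

I_sat = np.pi * h * c * gamma / (3 * wavelength**3)   # W/m^2
I_peak = 2 * power / (np.pi * w**2)                   # W/m^2, Gaussian peak intensity
s = I_peak / I_sat                                    # on-resonance saturation parameter

print(f"I_sat  = {I_sat * 0.1:.2f} mW/cm^2")   # 1 W/m^2 = 0.1 mW/cm^2
print(f"I_peak = {I_peak * 0.1:.2f} mW/cm^2")
print(f"saturation parameter s = {s:.2f}")
```

Note this is the textbook on-resonance, two-level expression; I believe Doppler broadening in the vapor cell raises the effective saturation intensity, so this should be treated as a lower bound.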
Also, if you plot the relative dip depth (1 - background voltage / signal voltage), you see that it decreases as you increase power, which makes sense if the increase in absolute dip depth is not enough to keep up with the larger increase in background power (relative_dip_depth). This could also explain the saturation effect we see (the dip depth becomes less pronounced once you increase the power too much).
From the 7/19 plots, it seems like there is a power limit beyond which the dip depth begins to decrease, which is due to some saturation effect. I want to say this is a saturation limit on the photodetector, but as Ian pointed out it could be an issue with the Moku, so I should measure the power before it reaches the photodetector to confirm. Based on the manual, if the photodetector output exceeds 5 V into a 50 Ohm load (which is what the Moku is set to), then it will saturate; however, the output has consistently exceeded a 5 V reading and has still retained an increasing dip depth. Should double check the settings.
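To make that check concrete, a small sketch (Python) estimating the expected photodetector output from the measured optical power. The responsivity and transimpedance gain are placeholders that should come from the detector datasheet, and the assumption that the output roughly halves into a 50 Ohm load is common for amplified detectors but not something I have checked for this one:

```python
# Quick estimate of the photodetector output voltage, to compare against
# the 5 V saturation spec into 50 Ohm. Responsivity and gain are
# placeholders -- take the real numbers from the detector datasheet.

responsivity = 0.5    # A/W at the laser wavelength (placeholder)
gain = 1.5e4          # V/A transimpedance gain at the current setting (placeholder)
halved_into_50ohm = True  # assumption: output divides by 2 into a 50 Ohm load

def expected_output(power_w):
    """Expected detector output voltage for a given optical power (W)."""
    v = power_w * responsivity * gain
    return v / 2 if halved_into_50ohm else v

for p_mw in (0.1, 0.5, 1.0, 2.0):
    v = expected_output(p_mw * 1e-3)
    flag = "  <-- above 5 V saturation spec" if v > 5 else ""
    print(f"{p_mw:4.1f} mW -> {v:5.2f} V{flag}")
```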
Smaller absorption features (on both dips) are still getting lost. With the right alignment, you can see them. Maybe the photodetector needs to be a certain distance away to register them, but I'm not sure why/how, since all the light is getting concentrated onto the active area anyway.
The way I measure the dip depth is to take the ends of the temperature vs. reading plot and fit an exponential to them (because the temperature curve sometimes trails upward toward higher temperatures, so the background is not linear). After subtracting it, I get something like the second slide (subtracting_background_example.pdf). Before, I was doing this by hardcoding indices, so this should be better and easier to verify.
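A minimal sketch of that background subtraction (Python with NumPy/SciPy), assuming the scan is stored as temperature and voltage arrays and that the outer 15% of points on each end are dip-free; the edge fraction and the exact exponential form are my choices for illustration, not necessarily what the analysis code uses:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_background(T, a, b, c):
    """Exponential background model: c + a*exp(b*T)."""
    return c + a * np.exp(b * T)

def subtract_background(temperature, voltage, edge_frac=0.15):
    """Fit an exponential to the dip-free ends of the scan and subtract it.

    Returns the background-subtracted signal and the fitted background.
    """
    temperature = np.asarray(temperature, dtype=float)
    voltage = np.asarray(voltage, dtype=float)
    n = len(temperature)
    k = max(3, int(edge_frac * n))
    edges = np.r_[0:k, n - k:n]                  # indices of the two ends only
    p0 = (np.ptp(voltage[edges]), 0.0, voltage[edges].min())
    popt, _ = curve_fit(exp_background, temperature[edges], voltage[edges],
                        p0=p0, maxfev=10000)
    background = exp_background(temperature, *popt)
    return voltage - background, background

# Example usage (hypothetical file/column layout):
# T, V = np.loadtxt("scan.csv", delimiter=",", unpack=True)
# residual, bg = subtract_background(T, V)
# dip_depth = -residual.min()                      # absolute dip depth
# rel_depth = dip_depth / bg[np.argmin(residual)]  # depth relative to local background
```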
See the picture of the setup for the measurements. I plan to do the same with a 75 mm lens for a more drastic difference in beam size. Before that, the laser output will be collimated using the fiber collimators at the laser. Ideally, light exiting the laser would be collimated; this makes it easier to mode match the beam, which we will need to do eventually for the probe/pump.
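For that later mode matching, a quick way to quantify how well two beams overlap is the standard coupling formula for two co-axial Gaussian beams with waists at the same plane (a simplifying assumption); the waist values in the example are placeholders:

```python
def mode_overlap(w1, w2):
    """Power coupling between two co-axial Gaussian beams whose waists
    w1 and w2 (same units) sit at the same plane."""
    return (2 * w1 * w2 / (w1**2 + w2**2)) ** 2

# Placeholder waists: a 1.0 mm beam against a 1.5 mm beam overlaps at ~85%.
print(mode_overlap(1.0, 1.5))
```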
Miscellaneous:
Connected CTL OUT of the TED200C (laser temperature controller) to input 3 of the Moku (labelled Paige Bueckers). Per the TED200C manual, CTL OUT must see a load impedance of at least 10 kOhms, otherwise it will not register, so the Moku channel reading CTL OUT was set to 1 MOhm input impedance. This output is a voltage proportional to the actual temperature reading (via the thermistor resistance); the range is 0 to 10 V with a conversion coefficient of 2 kOhms/V (using the TH 20 K sensor). Using this signal, which is more reflective of the actual temperature, instead of the previous triangular wave signal did not solve the shifting issue: the frequency still has to be low to avoid an obvious shift. I thought this could be because of cable length differences between the CTL OUT BNC cable and the photodetector BNC cable, but the propagation delay over that difference should be negligible. I also think the temperature controller has some asymmetry, since cooling the laser may not have the same efficiency as heating it (this could be something with the PID controller of the laser temperature controller).
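For reference, a small sketch (Python) converting the CTL OUT voltage to a thermistor resistance and then to a temperature. The 2 kOhms/V coefficient is the one quoted above for the 20 kOhm sensor range; the beta-model parameters (R25, B) are generic NTC placeholders and should be taken from the actual thermistor's datasheet:

```python
import math

# Convert TED200C CTL OUT voltage -> thermistor resistance -> temperature.
# 2 kOhm/V is the coefficient quoted above for the 20 kOhm sensor range;
# the beta-model parameters below are placeholders for a generic NTC
# thermistor and should be replaced with the actual sensor's values.

KOHM_PER_VOLT = 2.0     # kOhm per volt on CTL OUT (20 kOhm sensor range)
R25_KOHM = 10.0         # kOhm, thermistor resistance at 25 C (placeholder)
BETA = 3900.0           # K, beta coefficient (placeholder)
T25_K = 298.15          # K

def ctl_out_to_temperature_c(v_ctl):
    """Convert a CTL OUT voltage (V) to temperature (deg C) via a beta model."""
    r_kohm = KOHM_PER_VOLT * v_ctl
    inv_T = 1.0 / T25_K + math.log(r_kohm / R25_KOHM) / BETA
    return 1.0 / inv_T - 273.15

# Example: a 6.755 kOhm setpoint corresponds to CTL OUT reading
# about 6.755 / 2 = 3.38 V once the temperature settles.
print(ctl_out_to_temperature_c(6.755 / KOHM_PER_VOLT))
```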
I saw this page and thought it was interesting how a hot spot on the laser looks kind of like an inverted absorption dip. I didn't know lasers could do this; it probably shows up in the beam profile.