Monday, December 12, 2011

Machining with Ultrafast Pulses


(From Raydiance Inc)

As someone who has been trying to design novel ultrafast laser systems for the past eight years, my eyes were drawn to the title of Dr. Mike Mielke's talk from Raydiance, Inc., "Applications of Ultrafast Lasers," in the awesomely overwhelming list of invited speakers at CLEO 2012. Dr. Mielke's talk is one of a handful in CLEO's new Applications and Technology conference, which debuted last year in Baltimore in order to better bridge the gap between fundamental research and product commercialization.

Looking for background material, I went to Raydiance's website and found a wealth of information on micromachining, along with a host of video shorts of ultrafast laser micromachining in action. They are so pleasing to watch that I couldn't help embedding many of them in this post.

Micromachining with ultrafast lasers allows the removal of material without the introduction of heat (see the video above of laser micromachining on a match head without igniting it). Ultrafast lasers therefore give the advantages of laser machining, such as tailoring submicron features on the workpiece, without thermal collateral damage. For example, if you are going to have your dentist drill a tiny hole in one of your teeth (see the figure below), you'd rather have her use the 350 fs laser shown in b) than the 1.4 ns laser in a), in which the heat generated damages and fractures the tooth.



(Above: Drilling tooth enamel with a) 1.4 ns, 30 J/cm2 laser pulses and b) 350 fs, 3 J/cm2 pulses. From B.C. Stuart et al., LLNL.)


This is because drilling with femtosecond pulses relies on an entirely different physical process for material removal than drilling with nanosecond pulses. For long pulses (> 100 ps), photons are absorbed by the material and converted into heat, which eventually fractures, melts, or vaporizes material at (and near) the laser focus. On the other hand, if the pulse is short enough (< 1 ps), the material is removed primarily by photo-ionization: rather than dumping energy into the material as heat, the intense electric field of the pulse strips electrons directly off the target molecules, so essentially no heat is generated.
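To get a feel for why pulse duration matters so much, here is a rough back-of-envelope comparison in Python (my own sketch, using the fluences quoted in the tooth-drilling figure caption and a simple flat-top pulse approximation):

```python
# Back-of-envelope comparison of peak intensities for the tooth-drilling
# example above, using the fluences from the figure caption and a flat-top
# pulse approximation: peak intensity ~ fluence / pulse duration.

def peak_intensity(fluence_j_per_cm2, duration_s):
    """Approximate peak intensity (W/cm^2) of a flat-top pulse."""
    return fluence_j_per_cm2 / duration_s

i_ns = peak_intensity(30.0, 1.4e-9)    # 1.4 ns, 30 J/cm^2 -> ~2.1e10 W/cm^2
i_fs = peak_intensity(3.0, 350e-15)    # 350 fs, 3 J/cm^2  -> ~8.6e12 W/cm^2

print(f"ns pulse peak intensity: {i_ns:.2e} W/cm^2")
print(f"fs pulse peak intensity: {i_fs:.2e} W/cm^2")
print(f"fs/ns intensity ratio:   {i_fs / i_ns:.0f}x")
```

Even with ten times less fluence, the femtosecond pulse reaches a peak intensity roughly 400 times higher, which is what pushes the light-matter interaction into the field-ionization regime rather than the heating regime.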

Because the mechanism for material removal using ultrafast pulses does not depend on material properties such as the melting point, as thermal ablation does, conceivably any material can be machined using ultrafast pulses. This has allowed Raydiance to micromachine polymeric materials for manufacturing next-generation vascular stents and microfluidic devices (see the videos below).


(From Raydiance Inc)

Though micromachining using ultrafast lasers is not new, doing so in a robust workstation platform is. Raydiance claims to have created the first "industrial grade" femtosecond laser platform. They have an impressive record and a current partnership with ROFIN GmbH for the development of industrial-grade femtosecond laser micromachining workstations. In the literature on their website they state, "A laser is not a solution. It might be the engine of a solution, however, 21st century manufacturing floors demand more: software integration, beam delivery, motion control, and visioning systems." As an "engine builder" myself, it is helpful to know just what kind of engine is most useful for workstation integration. Sometimes "engine builders" get caught up in making Formula One cars when what is most helpful is a reliable Hyundai sedan. Not just any pulse width, energy, and repetition rate will do for athermal ablation, but neither will a workstation that lacks robust, continuous (thousands of hours, 24/7), turn-key operation.


(From Raydiance Inc)

To that end, Raydiance's core platform, Smart Light, can simply be adapted (mainly by turning down the power) for non-machining applications in defense and security, such as remote sensing of hazardous chemicals and LADAR. Dr. Mielke's invited talk will likely emphasize Raydiance's pursuits in these areas, since his talk is in the Government and Security subcategory. I will be interested to see what wavelength tuning options, wavelength conversion, or different center wavelengths Raydiance may be investigating for threat detection, since Smart Light currently resides in the telecom C-band near 1550 nm while many absorption lines for molecules of interest live in the mid-IR. Until then, I hope you will enjoy these videos of lasers "vaporizing" material and leaving beautiful designs for very practical applications as much as I do.

Wednesday, October 19, 2011

Using Soda Cans to Beat the Diffraction Limit

(Above: Setup of the metalens (soda cans) used to focus a sound wave to a size of 1/25th of the wavelength of the waves used to generate the beam)

Professor Mathias Fink from ESPCI ParisTech and Institut Langevin doesn't fit the typical profile for a plenary speaker at an optics conference, which is precisely why you won't want to miss his plenary talk at CLEO 2012 this May. Though acoustics is the consistent medium for his work, his research more broadly consists of understanding the nature of waves and how to get around limits assumed by our conventional understanding, such as diffraction-limited focusing and imaging. Much of Professor Fink's work since the late 1990s has used time-reversal, the subject of his upcoming plenary talk, to achieve these ends.

For example, in the August 5, 2011 issue of Physical Review Letters, Fink and collaborators demonstrated that they could focus a sound wave to 1/25th of the wavelength of the waves used to create the focused beam. Ironically, this novel feat was obtained using very conventional objects: soda cans and computer speakers.

The MacGyveresque experiment shown in the figure above uses a grid of soda cans, a group of subwavelength acoustic resonators, to act as a "metalens". When illuminated with a broadband field, this metalens allows subwavelength detail in the near-field to be encoded onto propagating waves. Essentially the metalens is a very good evanescent-to-propagating-wave converter, "unsticking" evanescent waves with subwavelength detail that are typically locked to the surface of the object (or source) of interest. This phenomenon is analogous to the generation of surface plasmons in near-field microscopy (see the August 16th post below). The propagating waves, now containing subwavelength information, can be detected in the far-field and time-reversed (essentially run backwards) in order to focus to subwavelength spots.

Time-reversal essentially amounts to phase conjugation; however, unlike conventional optical phase conjugation, time-reversal is broadband. It is phase conjugation for every frequency at once.
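This equivalence is easy to check numerically. The sketch below (my own illustration, not from Fink's work) uses NumPy to show that conjugating a real signal's spectrum and transforming back yields the time-reversed signal (circularly, with sample zero held fixed):

```python
import numpy as np

# Time reversal <=> per-frequency phase conjugation, checked on a random
# real signal: ifft(conj(fft(x))) equals x reversed in time (circularly,
# so that sample 0 maps to itself).
rng = np.random.default_rng(0)
x = rng.standard_normal(256)

y = np.fft.ifft(np.conj(np.fft.fft(x))).real

expected = np.roll(x[::-1], 1)   # circular time reversal of x
assert np.allclose(y, expected)
print("time reversal = phase conjugation at every frequency: OK")
```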

In order to experimentally employ time-reversal, one needs a time-reversal mirror (TRM). For an acoustic wave, a TRM is essentially an array of piezoelectric transducers spread over a surface through which the wave of interest propagates. Each transducer records the wave at its unique position and then plays back a time-reversed copy, such that each wave retraces its complex path back to the source. Professor Fink and collaborators first demonstrated the power of time-reversal in the mid-1990s when they focused sound to a much smaller spot size than allowed by the aperture of the transducer array producing it. They discovered that when the source wave was allowed to scatter many times off of a random array of steel rods, they could reverse the signal such that it came back to a smaller spot size than the original source. The long path lengths from multiple scattering effectively widened the focusing aperture. When they removed the steel rods, they could only focus to the predicted size limited by the aperture of the transducer array.
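A toy one-dimensional sketch of the TRM idea (my own idealization: each transducer sees the source through a pure delay, with no dispersion or loss) shows the replayed, time-reversed recordings re-aligning at the source:

```python
import numpy as np

# Toy time-reversal mirror: an impulse from a source reaches each of 16
# "transducers" through a channel modeled as a pure delay. Each transducer
# replays its time-reversed recording back through the same channel, and
# the replays all add up coherently at the source position (time index 0).
N = 2048
pulse = np.zeros(N)
pulse[0] = 1.0                              # impulse emitted by the source

rng = np.random.default_rng(1)
delays = rng.integers(50, 800, size=16)     # one random path delay per transducer

recordings = [np.roll(pulse, d) for d in delays]

echo = np.zeros(N)
for rec, d in zip(recordings, delays):
    tr = np.roll(rec[::-1], 1)              # time-reverse (sample 0 stays fixed)
    echo += np.roll(tr, d)                  # propagate back through the same delay

print(np.argmax(np.abs(echo)), echo.max())  # peak of height 16 at the source
```

Because each replay retraces its own delay, every contribution lands back at the source instant; this is the coherent re-focusing that the multiple-scattering experiments exploit.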

In a 1997 Physics Today article, Fink explains time-reversal using the analogy of an exploding block:

"If we want to reconstruct an exploded block from the various scattered pieces, a time-reversal mirror would be a device that precisely reverses the velocity of each debris particle as it crosses a closed surface surrounding the initial block. But before being sent back, each particle must be held for an appropriate delay time: To reconstitute the block, one has to send back first the slowest pieces, which had arrived last."

Time-reversing an exploding block is of course thermodynamically impossible; for waves, however, which can be described completely by a limited amount of information, it is reality. The strangeness of the multiple-scattering experiment performed by Fink et al. in the 1990s, and of current experiments, is that it is as if the exploding block were being time-reversed into a block that is smaller than the original.

So what about time-reversal for optics? Subdiffraction focusing and imaging in the optical domain have already been shown using a variety of techniques without time-reversal (for example, see Frank Kuo's September 10th post). However, two recent articles by McCabe et al. and Vellekoop et al. show the optical analog of Fink's 1990s work, in which a highly scattering medium combined with time-reversal (via spatial light modulators) can be used to enhance an optical focus. Another recent work by Xu et al. from Washington University demonstrates a technique called Time-Reversed Ultrasonically Encoded (TRUE) focusing, in which only the encoded portion of light from a microscope focal volume is time-reversed back to the sample for clean focusing. In this case the time-reversal mirror consists of a holographic setup using a photorefractive crystal to send a phase-conjugate copy of the right bit of light back to the focus.

I'm not only looking forward to Fink's plenary talk to learn about other uses of time-reversal in optics, but also to generate ideas about what other wave phenomena may be borrowed from fields like acoustics, microwave communication, and quantum mechanics, and vice versa. After all, it's just the same wave equation.

Sunday, October 2, 2011

Call for Papers

October 1 marked the official call for papers for CLEO 2012 in San Jose, CA. I've decided to include a recurring gimmick from my past blog submissions for CLEO: a countdown clock. Mark your calendars for December 5 for submitting contributed work; there are still a good 63 days left to collect good data, put finishing touches on new instruments, or simulate new phenomena.

The official CLEO website has already posted the plenary speakers. You can visit Expocad, which will take you through the expo map, giving you booth information as you hover your mouse over different areas of the map. Stay tuned to this blog and CLEO's other social media (in the lower right-hand corner of the main page) for the latest information for authors, attendees, speakers, students, and exhibitors.

Friday, September 30, 2011

Photonics for Global Health

(Left: Reflection images of a histopathology slide corresponding to skin tissue using a low-cost, portable, lens-free off-axis holographic microscope. Right: Conventional reflection-mode microscope image of the same specimen using a 4X objective lens (NA: 0.1). Image from Biomedical Optics Express)

Research performed in the Ozcan group at UCLA holds a unique place in the field of optics and photonics. Beyond the typical pursuit of advancing optical technology, another major initiative of this photonics group is solving problems in global health, particularly in resource-poor countries.

Early September marked a milestone for the UCLA group as they published work on a compact, low-cost (~$100 USD in parts), dual-mode microscope with 2 micron resolution in Biomedical Optics Express (also written up in a recent OSA press release). The key to making such a small-footprint, low-cost, lab-grade device is holographic microscopy. The image information stored in a hologram (the interference of the light reflected from or transmitted through the specimen with a reference beam) requires no lenses, drastically reducing the weight, size, and overall expense of the device. A computer, rather than a lens, reconstructs the wavefront reflected from (or transmitted through) the sample (see the figure below). The impact on world health will be increased blood diagnostics, water quality tests, tissue screening and analysis, and other imaging diagnostics in areas where microscopes are currently unavailable due to cost and/or remoteness of location. Getting more microscopes into the hands of health workers may have a large impact on heading off disease outbreaks, as well as on treatments for individuals.
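The lens-free "refocusing" step can be sketched numerically. The snippet below is a minimal illustration of the general angular spectrum propagation method, not the Ozcan group's actual reconstruction algorithm, and the wavelength, pixel size, and propagation distance are made-up values for illustration:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (meters) using the
    angular spectrum method. A negative z back-propagates toward the object,
    which is how a computer can 'refocus' a lensless hologram."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2      # (k_z / 2*pi)^2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    h = np.where(arg > 0, np.exp(1j * kz * z), 0.0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * h)

# Made-up parameters for illustration: 532 nm light, 2 um pixels,
# hologram plane 1 mm from the object.
holo = np.ones((256, 256), dtype=complex)            # stand-in for a recorded hologram
obj_plane = angular_spectrum_propagate(holo, 532e-9, 2e-6, -1e-3)
```

In a real device the recorded intensity hologram would first be combined with the reference-wave model; the point here is only that a cheap FFT replaces an expensive, heavy lens.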

(Schematic of the 200 gram microscope developed by the Ozcan group in reflection mode. LD: laser diode, PH: pin hole, BC: Beamsplitting Cube. Note the two AA batteries as the power source as well as for scale. Image from M. Lee, O. Yaglidere, and A. Ozcan, Biomedical Optics Express, 2, 2721 (2011). )

The idea of using holograms in microscopy is not new. In fact, it was the quest for higher resolution in electron microscopy that prompted Dennis Gabor to devise wavefront reconstruction by holography in 1948. Gabor coined the word "hologram," which translates to "whole message," to emphasize the amount of information stored in this very special interference pattern. For a brief history of holography, from its roots in microscopy, through its development via radar, to its boom in mainstream art and media in the '60s and '70s, see Jeff Hecht's 2010 OPN article.

What makes the Ozcan group's work so special is not the use of a fundamentally new technique, but clever and impressive engineering. This holographic microscope is small, inexpensive, and works in both transmission and reflection modes. The transmission mode of the current device is similar to an earlier work by the Ozcan group: a cell-phone microscope. In the summer of 2010, the UCLA group published work in Lab on a Chip demonstrating a clever attachment that could convert an ordinary cell phone into a lab-grade microscope (see the YouTube short below). By employing digital holographic microscopy, the group was able to produce a 38 gram attachment without any lenses, lasers, or bulky optics, which, when incorporated with the cell-phone camera, produced a hologram on the phone's detector array. The idea is that the hologram data would be sent over the same cell phone to the closest hospital/analysis station, a computer would process the hologram to extract the image information, and the image would be sent back to the same phone, all within seconds of placing the sample into the device.



Though the current device cannot be so easily integrated onto a phone, the additional benefit of reflection-mode operation makes up for its "bulkiness." By operating in reflection mode, the new microscope is additionally suited for imaging optically dense media like tissue, something not possible using in-line transmission holography due to spatial distortions in the reference wave. The developers decided to keep transmission mode as an option, however, since it produces a larger field of view than its reflection counterpart and is easier to align and operate.

Once again, a computer is needed to reconstruct the image from the hologram; however, the hologram data could be sent to the nearest processing center if the field worker is not already carrying a laptop. My thoughts immediately led to the computer produced by Quanta through the One Laptop per Child (OLPC) initiative. The XO laptop costs approximately $200 and can run on power sources such as solar, human power, generators, wind, or water power. Though the aim of OLPC is to close the digital gap for children of resource-poor nations, I wonder if an XO equivalent could be developed to bridge the gap in digital medicine, not just on a records basis, but for data acquisition and processing for field-portable medical instruments like the microscope produced by the Ozcan group. I can imagine this $100 microscope interfaced with a $200 laptop.

What is exciting about this work is that it underscores the beauty and power of cross-discipline connections. Though lensless holographic microscopy is not new, using it as the foundation for low-cost, field-portable devices is. To learn about more innovations like these, be sure to visit the CLEO Applications and Technology: Biomedical sessions (just in their second year) this spring in San Jose.

Tuesday, August 16, 2011

Year of the Plasmon

(Left: August cover of Nature Materials showing Liu et al.'s work on gas sensing using the plasmonic response of a triangular nanoantenna. The work in the Nature article was expanded from that presented in the CLEO 2011 postdeadline session)

This year may not be flush for the market, but it is looking good for plasmonics. An expansion of the work shown in the CLEO 2011 postdeadline paper "Nanoantenna-enhanced gas sensing in a single tailored nanofocus," from Na Liu et al., just took the August cover of Nature Materials (see the figure above). Additionally, plasmonics has had a solid recent run on the mainstream physics circuit following the publication of two Physics Today articles earlier this year, in February and July.

The July issue of Physics Today features an article by Lukas Novotny of the University of Rochester in which he reviews near-field optics, the broader category in which plasmonics resides. Earlier in the year, Mark Stockman of Georgia State University wrote a very accessible and informative article on nanoplasmonics that took the cover of the February issue of Physics Today. The cover shows a 13th-century stained glass window of Sainte-Chapelle in Paris, whose yellow and red brilliance is assumed to come from nanoplasmonic resonances of silver and gold nanoparticles in the glass. The way the red changes over the length of the window is said to have been purposely designed to mimic the flowing blood of Christ.

(Above: Sketch of Edward Synge's proposed near-field microscope. The red dot denotes the gold nanoparticle. Picture from L. Novotny, Phys. Today, 64, 47 (2011))

Novotny's July article also offers a romantic insight into the history of near-field optics and plasmonics. Novotny recounts how, in 1928, Edward Synge wrote a "prophetic letter" to Einstein proposing a near-field microscope (see the figure above) to optically image a biological sample below the diffraction limit. Synge's proposed microscope, which was not realized until 1982 (by Dieter Pohl's group at IBM in Switzerland), looks eerily similar to current techniques used for the development of plasmonic devices and sensing: the use of metallic nanoparticles to generate surface plasmons in order to enhance a probing optical field. The two Physics Today articles are must-reads for those who need a crash course in plasmonics.

A plasmon is created when the electrons on a metal surface are periodically displaced with respect to the lattice ions by an external optical driving field, creating an "electron oscillator." The frequency of the surface plasmon depends not on the driving field, but on the restoring force and the effective mass of the electrons in the metal. Changing the size and geometry of the metal structure alters the restoring force and thereby the plasmon frequency. Using metallic nanostructures of the right size (smaller than the skin depth of the metal but bigger than the distance an electron moves during an optical cycle, roughly 2-20 nm), the electric field due to the plasmon becomes highly localized in the immediate vicinity of the outer surface of the nanostructure (see the figure below). By coupling the surface plasmon to propagating optical radiation, nanoscale information from the plasmon can be encoded onto micron-sized optical waves, as it is in near-field microscopy. The highly localized field can also be used for a number of sensing techniques, like SERS, in which the interaction of a probe beam with a molecule is significantly enhanced by the presence of nearby nanostructures. The cover article from Nature Materials uses a standard plasmonics approach, monitoring the redshift of the plasmon response of a gold triangle structure for ultra-sensitive detection of hydrogen.
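As a numerical illustration of that geometry dependence, here is a textbook Drude/quasi-static sketch of my own (not from either article): the dipole response of a metal sphere much smaller than the wavelength peaks where the Drude permittivity reaches -2 (the Frohlich condition in vacuum), i.e. near the plasma frequency divided by the square root of 3, independent of the driving field strength.

```python
import numpy as np

def drude_eps(w, w_p, gamma):
    """Drude-model permittivity of a simple metal."""
    return 1.0 - w_p**2 / (w**2 + 1j * gamma * w)

def sphere_response(w, w_p, gamma):
    """Quasi-static dipole polarizability of a small metal sphere in vacuum
    (in units of 4*pi*eps0*a^3); resonant where eps(w) ~ -2."""
    eps = drude_eps(w, w_p, gamma)
    return (eps - 1.0) / (eps + 2.0)

w_p = 1.0                                   # plasma frequency (arbitrary units)
w = np.linspace(0.3, 0.9, 4001)
resp = np.abs(sphere_response(w, w_p, gamma=0.01))

w_res = w[np.argmax(resp)]
print(w_res)                                # close to 1/sqrt(3) ~ 0.577
```

Changing the particle's shape changes the "+2" depolarization term in the denominator, which is one way to see why elongated or triangular antennas tune the resonance.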

(Above: Diagram of plasmon dynamics on a 10 nm silver nanosphere. Eo represents the external light field, the black arrows represent the electric field from displaced electrons (the plasmon field), and the red arrows show the field inside the sphere. Picture from M. Stockman, Phys. Today, 64, 39 (2011))

The exciting field of plasmonics has applications and positive repercussions in other fields as well. Tumor cells have been found to readily take up nanoparticles. By illuminating tissue with non-lethal IR light, the heat generated by the enhanced local fields of the high-Q nanostructures selectively kills cancer cells. Plasmon-enhanced solar energy conversion entails using metallic structures to better localize light for solar concentration. The opening tutorial, "Solar Energy Applications of Plasmonics," by Professor Harry Atwater of Caltech in the CLEO:QELS session "Frontier Applications of Plasmonics" during CLEO 2011, addressed this burgeoning new field.

There is no doubt that CLEO 2012 will host a number of technical and invited talks, both fundamental and applied, on the subject of plasmonics. After reading the Physics Today articles, I think I will have to add a lecture or two on plasmonics to my junior-level E&M class this fall. I will definitely have to attend some plasmonics talks at the next CLEO to learn more about this extremely interesting work that straddles fundamental physics and cutting-edge applications.

Tuesday, July 12, 2011

Are we Entering a Solar Boom?

(First Solar employees working on the 21 MW solar power station in Blythe, CA in the Mojave Desert. The project was completed in December 2009. Photo from cnet News; originally from First Solar. First Solar just received $4.5 billion in DOE loans to build three new stations in the Mojave Desert whose total output will be 1.33 GW)

This summer seems to be marked by a frenzy of solar energy initiatives and development. The Business News section of the May issue of Nature Photonics reported on four recent major investments in solar technology manufacturing. JA Solar of Shanghai plans to build a 3 GW capacity plant in Hefei, China for the manufacture of monocrystalline silicon solar cells; investors have pledged $2.05 billion over the next four years, and production is slated to begin in 2012. Polysilicon Technology Company, a joint venture between Mutajadedah Energy of Saudi Arabia and KCC Corporation of Seoul, will build a $1.5 billion facility to produce solar-grade polysilicon in Jubail, Saudi Arabia by 2017. The Indian government is discussing a joint venture with the Moscow-based nanotech company Rusnano to obtain a consistent supply of silicon for Indian photovoltaic manufacturers, with hopes of obtaining 2,000 tons of silicon ingots for solar cell production. And SoloPower of San Jose was guaranteed $197 million from the U.S. Department of Energy (DOE) to build a plant in Oregon for the manufacture of flexible copper-indium-gallium diselenide (CIGS) cells for light-weight solar panels.

Though CIGS cells are not as efficient as crystalline silicon, like other thin-film technologies they reap the benefits of inexpensive fabrication and production compared to silicon cells. CIGS cells need only a fraction of the material to absorb incident photons, and they can be manufactured in large-area, automated processes, unlike the more time-intensive and expensive ingot growth used for silicon. The flexible thin-film technology allows SoloPower's CIGS panels to be 75% lighter than traditional panels for less expensive and more practical installation on industrial rooftops.

The DOE made even bigger news for solar energy investment, however, at the end of June when it promised $4.5 billion for the construction of three different California photovoltaic power plants: Antelope Valley Solar Ranch 1, the Desert Sunlight Project, and the Topaz Solar Project. Arizona-based First Solar, Inc. will sponsor all three projects, constructing each solar array with cadmium telluride (CdTe) thin-film photovoltaic modules. Together, the new power plants will provide 1.33 GW (powering the equivalent of 275,000 U.S. homes) and offset the generation of 1.8 megatons of carbon dioxide. As described by Alexis Madrigal, author of "Powering the Dream: The History and Promise of Green Technology" (Da Capo Press, 2011), in a June 17 interview on NPR's Science Friday, the Mojave Desert solar plants should prove particularly effective compared to other green initiatives. One reason is their location: the plants will be near large population centers, L.A. and Las Vegas, while sitting in ideal conditions for sunshine, the desert. This is in contrast to wind energy, where ideal locations for wind farms often correspond to areas with low population densities (like the plains of North Dakota), so power distribution becomes an issue. Additionally, desert sunlight lends itself to matching the grid's peak solar output with peak usage: as everyone cranks up the air conditioning at the hottest time of the day, the PV modules are cranking out the most amps.

(Time-line of photovoltaic efficiencies for various cell types; from the National Renewable Energy Lab)

The choice of thin-film CdTe for the solar cells is once again a balance of cost and efficiency. First Solar claims that its CdTe modules have the smallest carbon footprint (including fabrication and recycling of the module over its lifetime) of any photovoltaic on the market, as well as the fastest energy payback time (EPBT). They also note that the favorable temperature coefficient of CdTe allows their modules to perform better than silicon at high operating temperatures, which will obviously be crucial given the heat of the Mojave.
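To see what a temperature coefficient means in practice, here is an illustrative calculation. The coefficients below are typical ballpark values for the two technologies that I am assuming for illustration; they are not First Solar's published specifications:

```python
# Illustrative linear model of module output vs. cell temperature.
# Coefficients are typical ballpark values for each technology,
# assumed for illustration (not manufacturer specifications).
ALPHA_CDTE = -0.0025   # fractional power change per deg C (CdTe, assumed)
ALPHA_SI = -0.0045     # fractional power change per deg C (c-Si, assumed)

def relative_output(alpha, cell_temp_c, rated_temp_c=25.0):
    """Module power relative to its 25 C nameplate rating."""
    return 1.0 + alpha * (cell_temp_c - rated_temp_c)

# At a 65 C cell temperature, plausible for a desert installation:
print(f"CdTe: {relative_output(ALPHA_CDTE, 65.0):.0%} of rated power")  # ~90%
print(f"c-Si: {relative_output(ALPHA_SI, 65.0):.0%} of rated power")    # ~82%
```

Under these assumed numbers, a 40-degree rise above the rating temperature costs the silicon module nearly twice the output that it costs the CdTe module, which is the advantage First Solar is pointing to.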

Other summer solar news includes McGraw-Hill's June 13 announcement that it will build the world's largest private solar plant at its East Windsor, New Jersey campus. Though New Jersey is not as sunny as the Mojave Desert, the plant is slated to generate an impressive 14 MW.

A detailed solar map released by the City University of New York on June 16 shows the solar energy production potential of New York City's rooftops. The New York Times reported that the solar map, made from LIDAR sweeps taken the previous year, shows that two-thirds of New York's rooftops have great potential for solar harvesting. If these rooftops were covered with solar panels, the city could use them to meet half of its electrical power consumption needs, even at peak use.

NYC roofs were not the only ones in the solar limelight recently. Google announced on June 14 a partnership with SolarCity in which it will provide a $280 million fund to help finance SolarCity's solar panel leasing program for rooftops across the U.S. The largest hurdle for residential solar panels is the up-front cost of purchase and installation, typically tens of thousands of dollars. By bankrolling SolarCity's leasing program, Google will put solar panels on the roofs of more U.S. homes. Other U.S. companies with solar leasing programs include Sungevity and SunRun.

In more solar news, IEEE released a statement on June 15 projecting that solar power could become the most economical form of energy generation in the next 10 years, provided photovoltaic efficiency continues to increase and mass production of solar cells continues to develop.

To that aim, a collaboration between Japan's New Energy and Industrial Technology Development Organization (NEDO) and the European Commission, which began June 1, 2011, will attempt to push PV efficiency to greater than 45% in the next four years. The record currently stands just over 40%, in concentrator, multi-junction devices made by companies like Sharp, Spectrolab, Spire, and Solar Junction (see the efficiency curve above). To meet this goal, the Japanese-EU collaboration, led by Antonio Luque of the University of Madrid and Masafumi Yamaguchi of the Toyota Technological Institute, will pursue an approach using multi-junction cells, but will also explore options of adding nanostructures like quantum wells or quantum dots. They will also try to enhance the design of typical solar module concentrator optics.

The potential solar boom will not only be good for our planet but will provide the optics community with new challenges and opportunities in R&D. If investment in solar energy continues at its current pace, we could see a significant shift in the funding and direction of optics research. Whatever the shape or face of the next most efficient or cost-effective solar cell, rest assured you will hear about it at CLEO.

Wednesday, May 11, 2011

Capitol Hill Day


(From Left: Laura Kolton (OSA Public Policy Team), Greg Quarles (President of B.E. Meyers Electro Optics), James van Howe (Assistant Professor, Augustana College), Representative Bobby Schilling (IL), Adam Zysk (research associate IIT, Chicago), Hong-Jhang Syu (Research Assistant, National Taiwan University))


On Thursday, May 5th, a number of conference attendees took a bus to Washington, D.C. to visit the offices of the members of Congress and senators of our respective legislative districts and states. Our goal was to help defend science funding levels in the wake of strong national sentiment to reduce U.S. federal spending.

What we learned the night before at the briefing in the Baltimore Convention Center was fascinating, and the actual day of visiting policy-makers to discuss science-funding issues was exhilarating. At the briefing, we learned that one of the most effective ways of influencing a senator or member of Congress is a conversation with a constituent. Visits from lobbyists actually rank much lower in survey data from congressional staffers. What was also fascinating to me was that an email from a constituent ranked just below a visit from a constituent and still far above a visit from a lobbyist. I immediately promised myself to regularly email my representatives, and you should too! It works!

At the briefing, one of the speakers, Mike Lubell, the Public Affairs Director at the American Physical Society, showed us revealing survey data from two focus groups: one from a community with many ties to science industry and one with very little. Shockingly, the results were roughly the same for each:

1. The groups generally loved science and were supportive of science research.
2. The groups thought that science should be a national priority.
3. Here's the kicker: the groups were distrustful of the federal government as the funding source for science research. Somehow they want to keep good science without federal funding.

So there is good news and bad news. Scientists can expect good moral and emotional support from the public, but maybe not dollars. In fact, Lubell showed a list on which science was the second most frequently chosen category when the groups were asked where to cut federal funding.

Our task for the Capitol Hill visits was well laid out: try to educate our representatives about the role of federal money in science research; how it is almost the sole source of science funding in the U.S., how science requires sustained funding over time to produce results, and how our quality of life is enhanced by the technology we develop, such as noninvasive biomedical imaging techniques for cancer diagnostics and photovoltaics for green energy production, not to mention the highly skilled workforce good science creates.

I particularly liked Lubell's list of "good" and "bad" words and phrases to bring up or avoid when trying to convince non-scientists of the importance of federal funding for science. Words like "basic research" and "fundamental research" did not bode well in the public eye; people took "basic" and "fundamental" to mean "remedial." The phrase "investing in America's future" tracks well with Democrats but not Republicans, since to the GOP "investment" means "spending." There was a fairly strong distaste, shared equally across parties, for the idea that America needs to be the top competitor among foreign nations in science. I guess our Cold War attitude has been slipping since the Reagan administration. A "good," party-neutral phrase is the cheesy (sorry, but it is) "Building a better America." The list goes on and is both entertaining and illuminating as to the perception of science in the U.S.

Because I live in Iowa but work in Illinois, I met a handful of staffers from both states. From the picture above, you can see that my team was able to personally meet congressman Bobby Schilling (R) from Illinois. Congressman Schilling was extremely kind and hospitable, as were the staffers from the other offices we visited: Senator Kirk (R-IL), Senator Grassley (R-IA), Congressman Quigley (D-IL), and Congressman Braley (D-IA). I was very impressed with the professionalism and cordiality of the young staffers who took the time to listen to our concerns.

So I know you are now asking yourself, "What can I do to help?" Begin by emailing your representatives with your concerns for science funding (at the Capitol Hill visits we were advocating sustained funding levels, not an increase). Also be sure to visit the OSA public policy homepage for updates on additional organized Capitol Hill visits, pending science-bearing legislation, letters to sign, etc.

For more info and photos on the Capitol Hill day event click here.

Sunday, May 8, 2011

See you in San Jose!

I hope this post greets everyone safe and cozy at home, resting from a week packed with optics innovation. I am still catching my breath. There was just so much. I would have liked to have attended many more talks, visited more booths at the expo, met up with more colleagues, and posted more (I still might on the latter- it turns out, for better or worse, Newton's First Law applies to blogs as well). Harold Metcalf was correct in his pre-CLEO analysis: "Looking over the program and the titles of the sessions, I feel like a kid in a candy store- with unlimited funds, but limited time. It's impossible to do everything."

However, I intend to update this CLEO blog for a little longer with posts that I couldn't squeeze in during the week. I am going to try to have my "candy" and eat it too, though hopefully without a stomachache (figuratively or literally) for blogger or reader.

Regardless, mark your calendars for May 6-11, in 2012 for next year's meeting in San Jose!

Saturday, May 7, 2011

Time-Lens 2.0

Brian Kolner and Moshe Nazarathy coined the term "time-lens" in 1989 after using one to compress a pulse. They made a system in the time domain that was a complete analog to a lens system in space. Their time-lens took a fat pulse and "focused" it, just like a spatial lens could take a fat beam and focus it to a smaller size. For more details, see Kolner's well-written 1994 review on space-time duality and van Howe and Xu's 2006 review on temporal-imaging devices.

Because much of my thesis work focused (pun intended) on temporal-imaging devices, I can't help seeing them everywhere. This year's CLEO conference was no exception with some talks being more direct about it than others.

Takahide Sakamoto from the National Institute of Information and Communications Technology in Tokyo, Japan discussed time-lenses without using the term itself in his tutorial, CMBB1, "Optical Comb and Pulse Generation from CW Light." Sakamoto showed impressive work on comb synthesis from CW light using electro-optic (EO) modulation. He demonstrated that EO phase modulation provides the most efficient way to move from CW light to the picosecond bandwidth regime. Higher-order nonlinearities like chi-3 in fiber (EO is a chi-2 process) can then be used to push the bandwidth into the femtosecond regime. Sakamoto stressed a clever biasing and driving technique using an intensity modulator that allowed truly flat comb spectra.
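For the curious, the comb-from-phase-modulation idea is easy to play with numerically. Below is a quick numpy sketch of my own (a toy model in normalized units, not Sakamoto's actual system): sinusoidally phase-modulating a CW field produces comb lines at integer multiples of the drive frequency, with line amplitudes given by Bessel functions J_n of the modulation index.

```python
import numpy as np

# Toy model: phase-modulate a CW field with a sinusoidal drive and look at
# the resulting comb. Line n should have amplitude |J_n(beta)|.
fs = 1024           # samples per normalized time unit
T = 64              # window length, in modulation periods
beta = 5.0          # phase-modulation index (radians)
f_mod = 1.0         # modulation frequency (normalized units)

t = np.arange(0, T, 1.0 / fs)
field = np.exp(1j * beta * np.sin(2 * np.pi * f_mod * t))  # phase-modulated CW

spectrum = np.abs(np.fft.fft(field)) / len(t)
freqs = np.fft.fftfreq(len(t), d=1.0 / fs)

# Amplitude of the n = 1 comb line, and the Bessel value it should match,
# J_1(beta), computed from the integral definition to keep this numpy-only.
line1 = spectrum[np.argmin(np.abs(freqs - f_mod))]
theta = np.linspace(0, np.pi, 4001)
j1 = np.cos(theta - beta * np.sin(theta)).mean()  # ~ (1/pi) * integral = J_1

print(f"n=1 comb line: {line1:.4f}   |J_1(beta)|: {abs(j1):.4f}")
```

Moving that bandwidth from the picosecond to the femtosecond regime is then a matter of the chi-3 broadening Sakamoto described, which this little sketch doesn't attempt.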

Other work leveraging temporal-imaging concepts included CMD1, "Tunable high-energy soliton pulse generation from a large-mode-area fiber pumped by a picosecond time-lens source," from Chris Xu's group at Cornell University, and JTuI77, "Scalable 1.28-Tb/s Transmultiplexer Using a Time Lens" by Petrillo and Foster. The former used electro-optic modulation as the time-lens to generate a seed source from CW light for soliton shifting. The latter used four-wave mixing as the time-lens mechanism in order to look at the Fourier transform of a data packet for high-speed time-division multiplexing to wavelength-division multiplexing conversion (just as a spatial lens can provide a Fourier transform of a spatial profile, a time-lens can give the power spectrum of a temporal profile). Note that the Xu group has also developed a time-lens source for CARS microscopy.

Work from Andrew Weiner's group also made use of time-lenses: CWN3, "Broadband, Spectrally Flat Frequency Combs and Short Pulse Sources from Phase modulated CW: Bandwidth Scaling and Flatness Enhancement using Cascaded FWM" and CFG6, "Microwave Photonic Filters with > 65-dB Sidelobe Suppression Using Directly Generated Broadband, Quasi–Gaussian Shaped Optical Frequency Combs." These works used a front end similar to that shown by Sakamoto, but then added nonlinear bandwidth enhancement via four-wave mixing.

Finally, former CLEO blogger Ksenia Dolgaleva authored CThHH6, "Integrated Temporal Fourier Transformer Based on Chirped Bragg Grating Waveguides," showing a compact, integrated Fourier transformer, which, though not a time-lens, is another device based on space-time duality. This paper draws upon co-author Jose Azana's previous fiber Bragg grating work, just one of Azana's many contributions to the field of temporal imaging.

If you look hard enough, you can see time-lenses anywhere- all you need is a device that imparts a quadratic phase in time onto an optical wavefront (nonlinear frequency mixing, used everywhere in optics, is one technique that works well). However, the big advantage of recognizing a time-lens when you have one is that you can bring all of the knowledge of spatial imaging systems to your work with a simple change of variables.
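To make that change of variables concrete, here is a small numpy sketch of my own (a toy model in normalized units, not any of the systems above): impose a quadratic temporal phase on a long, flat-phase pulse, follow it with the matching "focal" amount of dispersion, and the pulse compresses- exactly the temporal analog of a thin lens focusing a collimated beam.

```python
import numpy as np

# Time-lens compression toy model. A "collimated" (transform-limited) long
# pulse gets a quadratic temporal phase (the lens), then quadratic spectral
# phase (dispersion) brings it to a temporal focus.
N, dt = 32768, 0.0125                 # time grid (normalized units)
t = (np.arange(N) - N // 2) * dt
T0, C = 10.0, 1.0                     # input 1/e half-width; lens chirp rate

E = np.exp(-t**2 / (2 * T0**2))       # long, flat-phase input pulse
E_lens = E * np.exp(0.5j * C * t**2)  # time lens: quadratic phase in time

# Dispersive "propagation to the focal plane": quadratic spectral phase with
# the GDD that cancels the lens-imposed chirp.
w = 2 * np.pi * np.fft.fftfreq(N, d=dt)
a = 1.0 / T0**2
gdd = C / (a**2 + C**2)               # focal dispersion for this lens
E_out = np.fft.ifft(np.fft.fft(E_lens) * np.exp(0.5j * gdd * w**2))

def rms_width(field):
    p = np.abs(field)**2
    tc = np.sum(t * p) / np.sum(p)
    return np.sqrt(np.sum((t - tc)**2 * p) / np.sum(p))

w_in, w_out = rms_width(E), rms_width(E_out)
print(f"RMS width in: {w_in:.3f}  out: {w_out:.4f}  (~{w_in / w_out:.0f}x compression)")
```

The compression factor scales with the lens strength C*T0^2, the temporal analog of a lens's numerical aperture; swap the variables back and the same script describes a spatial lens.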

Thursday, May 5, 2011

The Real-Life Tony Stark


(Robert Downey Jr. plays Tony Stark, defense contractor, billionaire playboy, scientific genius, and alter ego
Iron Man. Image from www.comicbookmovie.com, still from Iron Man 2)

On Tuesday, May 3, I sat in on part of the Market Focus talks at the CLEO expo on defense. The Market Focus sessions cover various business and commercial applications of optics research. Last year was the first time I attended a Market Focus session, and I knew I had to go back. It is a little expo in itself that requires no walking- you just sit down and find out about trends and problems that need solving in particular commercial areas. Great fodder for new research ideas!

Although the sessions are broken up into specific talks, what makes these sessions unique is that they turn into a round-table discussion at the end (and even during) the session. They have a more intimate and informal feel than the technical talks and have been organized with a specific agenda of bringing the attendee to a common understanding of the particular market being addressed.

For example, in the defense session, moderator John Koroshetz from Northrop Grumman Laser Systems laid out the logical order of talks to help us get our foot in the door of defense contracting: 1) Science and Technology Development, 2) Product Development, 3) Manufacturing, and 4) The Soldier's Perspective. The aim was to help a novice understand the cycle of product development, funding, testing, manufacturing, and end-use, which often cycles back to development for upgrading and enhancing the product into its next-generation phase.

The first speaker, Craig Hoffman, from the Naval Research Laboratory, described science and technology development of infrared imaging systems. He broke up NRL's work in this area by spectral region:

-Visible: 0.4-0.7 microns (high photon energy makes devices tolerant to noise, but scattering makes it bad for imaging through dust or fog)

-Near IR: 0.7-3.0 microns (better for imaging through weather, but resolution worsens as the wavelength gets longer; becoming less tolerant to noise)

-Mid-IR: 3.0-5.0 microns (very good at seeing through weather, but even worse in resolution and noise tolerance; detectors may need to be cryogenically cooled to circumvent thermal noise)

-Longwave: 5.0-14.0 microns (least prone to scattering, worst for resolution and noise)

NRL is looking to piggyback imaging systems in these regions for applications in target acquisition, surveillance, and reconnaissance. The shorter wavelength systems use reflected light to gain information about the detail of a target, whereas the longer wavelength systems make use of the emissive properties of a target to gain bulk properties like thermal signatures.

For example, new thermal imaging systems use mid-infrared light detection to gain detailed information about a target, but also use longwave detection in order to gain a wider field of view. You need both since by themselves the former sacrifices field for resolution and the latter sacrifices resolution for field.
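As a rough illustration of the resolution side of this tradeoff (my own numbers, not NRL's), the Rayleigh criterion, theta = 1.22 * lambda / D, shows how the diffraction-limited spot grows with wavelength for a fixed aperture. Here I assume a hypothetical 10 cm aperture and a 1 km standoff:

```python
# Back-of-the-envelope diffraction limits for the four bands Hoffman listed.
# Aperture and standoff distance are assumed values, purely for illustration.
D = 0.10  # aperture diameter in meters (assumed)
bands = {"visible": 0.55e-6, "near-IR": 1.5e-6,
         "mid-IR": 4.0e-6, "longwave": 10.0e-6}  # representative wavelengths, m

for name, lam in bands.items():
    theta = 1.22 * lam / D         # diffraction-limited angle, radians
    spot = theta * 1000.0          # spot size at 1 km standoff, meters
    print(f"{name:>9s}: {theta * 1e6:6.1f} microrad -> {spot * 100:5.1f} cm at 1 km")
```

Going from the visible to the longwave band costs roughly a factor of twenty in resolvable detail at the same aperture, which is exactly why the dual-band systems pair them.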

Hoffman went on to describe military imaging problems that need better answers: 1) Detailed target identification. You don't want to just know whether a target is a tractor or a tank, but exactly whose tank it is (friend or foe?), and with enough time to either make evasive maneuvers or decide how to engage. 2) Fast data acquisition for reconnaissance. For this application, you want to collect data from an aircraft that is flying high and fast. You don't have the burden of real-time analysis as in target acquisition (you can spend weeks later analyzing the data), but you do need to collect enough information, with enough quality, during the short acquisition time. 3) Surveilling a small area for weeks on end to look at changes in patterns like traffic flow, building construction, etc.

Hoffman spoke briefly about things like SWaP- size, weight, and power. This acronym represents all the things that should be as small as possible for a viable military product. Pete Vallianos from N2 Imaging Systems followed up Hoffman's talk with more of the parameters, tests, and requirements related to SWaP. Vallianos underscored the importance of practicality and robust requirements when it comes to making products for the military. He repeatedly reminded the attendees that the military is not interested in your research per se (definitely not technology for its own sake), but rather in how technology might solve problems. While being developed, a product needs to go through a variety of rigorous tests- one of the stress tests from the Marines is dropping your product from a height of six feet onto a piece of plywood. If it doesn't survive, it's back to the drawing board.

Vallianos described some specific product development interests of the military in imaging:

-microbolometers
-small eye safe lasers
-CMOS, low level light detection
-lightweight visible optics
-high transmission in optics across the visible spectrum through the longwave IR
-moldable aspheric lenses
-robust broadband optical coatings
-OLEDs
-LCDs
-Lightweight optical "network" on a soldiers back
-Any decrease in power consumption for powered optics, to get rid of as many as possible of the batteries a soldier needs to carry in his or her pack.

Though the speakers did not describe any iron suits with flying capability, or magic cold-fusion-like power supplies, I think Tony Stark still would have been proud of this session.

Post-deadline Prep

Just wanted to chime in with a nagging note to remind you to plan your post-deadline itinerary before 8 pm. I am going to commit to PDPA-Session I and not try to hop around the standing-room-only crowds. I am particularly interested in the supercontinuum generation and frequency-comb work in this session, some of which is pushing into the mid-IR, where there are interesting chemicals to identify for spectroscopy and stand-off detection. Other broadband generation in this session has been performed with small waveguides or micro-resonators- little pocket combs on silicon (see the April 20 post for more details). I will be disappointed to miss the new Applications and Technology session, particularly the biomedical work. These groups by far always have the coolest pictures, images, and videos. If you can make it to these talks, the first four of PDPB-Session II, be prepared to be blown away by beautiful videos and images that address improving image acquisition rate, penetration depth, and resolution (again, see the April 20 post for more details). Very likely, these state-of-the-art techniques may be used on you in your lifetime. You can tell your doctor, "I saw this work at CLEO 2011 before you even graduated from med school."

Wednesday, May 4, 2011

The Romance of Photonic Lattices and Ham

(Left: beam profile in one of Mordechai Segev's 2D photonic crystal lattices. The transverse disorder increases from c) to e). From Schwartz, Segev et al)

This year's CLEO plenary sessions were exceptional. Monday evening hosted talks by Donald Keck, who pioneered the first low-loss optical fiber, and James Fujimoto, renowned for developing optical coherence tomography. Wednesday morning's plenary followed with exhilarating work (I'm serious, not just blogger hyperbole here) on photonic crystals. Even the awards were exciting. Amnon Yariv, responsible for the creation of the distributed feedback laser, and whose book "Optical Electronics in Modern Communications" I safeguard as one of the most helpful optics texts on my shelf, was presented with the 2011 IEEE Photonics Award. In his acceptance speech, he spoke briefly of his emigration to the United States from Israel 60 years ago. The freighter that carried him, other passengers, and iron ore across the Atlantic made entry in none other than the city of Baltimore. Yariv reminisced about his first meal after landing in a gritty, industrial, 1950s Baltimore: a ham sandwich (his first ever)!

After the awards, the plenary speakers Mordechai (Moti) Segev and Susumu Noda spoke about their respective work on photonic crystals. Segev, a charismatic speaker, set up a beautiful story addressing a fundamental understanding of periodic and random structures via photonic lattices. He specifically spoke about work on Anderson localization of photons, the optical analog of Anderson's theory for the localization of electrons in a crystal lattice. By introducing disorder into a 2D photonic lattice, Segev was able to constructively interfere light over a small area and destructively interfere it everywhere else. Diffraction is thwarted, analogous to how diffusion is thwarted by the interference of electron waves in Anderson localization in a crystal lattice (see figure above). Check out Frank Kuo's February 26 blog post for more details on this work.

Segev's group has of course pushed this work further, and into stranger directions. In contrast to confinement, Segev's group found that they could make a beam expand faster than diffraction- hyper-transport. One reason this work is so beautiful is that the theory and phenomena for photonic lattices can be borrowed from crystal lattices in condensed matter and vice versa. The equations are the same; you just need to change the variables and apply some good creativity. In 2008, Roati et al leveraged work on Anderson localization in photonic lattices to demonstrate Anderson localization for the first time using matter waves. This makes me wonder about designing a crystal structure with, say, hyper-diffusion. With a sharp mind and good imagination, the possibilities seem endless.

Segev does a fantastic job framing his work romantically. Though the devices his group makes have great practical implications, it is all through the guise of exploring the nature of the world. Segev helps remind us why we became interested in science in the first place: for the thrill of exploration and finding answers, to generate new questions, and to pursue things because they are beautiful.

Segev was a tough act to follow, but Noda came through. After Segev's nice setup, Noda showed one impressive photonic crystal device after the other. Here is a list of some of the groundbreaking devices he has made:

-Nano-cavities with Q > 40,000
-Inhibition of spontaneous emission
-Light that can make right angle turns
-Slowing or stopping light with probe pulses
-Novel gates for quantum computing
-Small beam-steering devices
-High-efficiency, high power single wavelength emitters
-Creation of unique beam patterns for applications like optical trapping of non-dielectric particles
-Sub-wavelength focusing of beams
-Thermal emission control (shaping and redistribution of blackbody spectra)

Again, for details on the physics behind some of these devices, see Frank Kuo's February 26 blog post. I left this plenary session inspired, ready to get into the lab to get some work done, and strangely with a craving for a Baltimore ham sandwich.

Tuesday, May 3, 2011

Lasers Can Mold and Manipulate Metal as Easily as Play-Doh

(Left: Dr. Marshall Jones from GE Global Research; Photo from GE)

The head of the student machine shop at Cornell, Bob Snedeker (Sned), liked to remind us, in a sarcastic fashion, that it's easier to take material away from a workpiece than to put it back on- warning: be careful how much you take off as you cut. Or as the old saying goes in carpentry, "measure twice, cut once." This is not necessarily true for laser machining of metals. Laser cladding, one of the topics discussed in the tutorial AMB1, "Industrial Applications of Laser Materials Processing," by Dr. Marshall Jones of GE Global Research, is a technique in which material can be added back to a workpiece where too much was accidentally cut off. Like Play-Doh, you can just put back on what you need. Wow, if only I could have laser-cladded my tool bits, and the special nut and bolt we were required to make in order to graduate from machine-shop training! Sned had high standards, and we spent many hours on a piece only to find out we needed to start over with fresh stock. It was back to the grindstone (literally!) until those bits had a perfect angle and facet.

GE uses laser cladding to clean up mistakes on particularly expensive pieces such as airfoils for aviation- you don't want to throw these out and start over. Laser cladding is also used to coat metals with a protective metal surface, a process called hardfacing.

Another laser processing technique explained by Marshall was laser-shock peening. Peening (as in a ball-peen hammer, a remnant tool from the days of blacksmithing) is a technique that reduces the fatigue of a metal (for example, preventing cracks from spreading) by applying a compressive force to the surface. In the old days this was done with a hammer; Marshall uses a "laser hammer." To create a shock wave powerful enough to peen, you need a laser beam with a power density of 10^10 W/cm2 and an interaction time with the surface of no more than 10 ns. Using an interface like water, through which the compression force propagates to reach the metal surface, can make peening more effective. GE also uses shock peening on aviation pieces to extend the life of a particular part.
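Out of curiosity, I ran those peening numbers through a quick back-of-the-envelope script. The spot size below is my own assumption, purely for illustration; only the power density and interaction time come from the talk:

```python
import math

# What pulse energy does laser-shock peening demand? Sustain >= 1e10 W/cm^2
# for up to 10 ns over an assumed 3 mm diameter spot.
intensity = 1e10          # required power density, W/cm^2 (from the talk)
pulse_len = 10e-9         # interaction time, s (from the talk)
spot_diam_cm = 0.3        # spot diameter, cm (my assumption)

area = math.pi * (spot_diam_cm / 2) ** 2   # spot area, cm^2
peak_power = intensity * area              # required peak power, W
pulse_energy = peak_power * pulse_len      # energy per pulse, J

print(f"peak power ~ {peak_power / 1e9:.2f} GW, pulse energy ~ {pulse_energy:.1f} J")
```

A few joules in ten nanoseconds- no wonder the "laser hammer" can shock-harden a turbine blade.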

Marshall also discussed a handful of additional applications for laser welding at GE. GE Rail uses laser welding for their diesel engine heads and liners. Their consumer division uses it to weld the electrodes of ceramic metal halide lamps- the ones that give the nice white-light spectrum crucial for retail lighting. Marshall brought up the fact that jewelers and clothing retailers particularly need white-light illumination in their stores to ensure customer satisfaction (you don't want what you thought was a red dress to turn out to be magenta when you leave the shop and get out into the sunlight). The ceramic metal halide lamp electrodes are particularly tricky to weld because of the need to join dissimilar materials: W:Mo and Mo:Nb welds. GE uses a special three-beam laser system to join these materials.

Finally, Marshall briefly discussed the laser systems themselves used for such applications. The conventional lasers used for processing are CO2 lasers and Nd:YAG systems delivering approximately 20 kW of average power. Unfortunately they have only 10% and 3% wall-plug efficiency respectively, and CO2 lasers require expensive specialty fiber for beam delivery due to their long emission wavelength. Ytterbium-doped fiber lasers and amplifiers are beginning to replace these workhorses thanks to their high wall-plug efficiency, 30%, and all-fiber configurations (zero optical alignment and high flexibility in footprint and beam delivery). The main disadvantage of high-power fiber lasers is expense. However, as fiber systems continue to be developed, they may very well replace their bulk competitors in the near future.
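To put those efficiencies in perspective, here is a quick script of my own comparing the electrical draw each laser type would need for 20 kW of optical output (a simple illustrative calculation using the efficiencies quoted above, not GE's figures):

```python
# Electrical power needed from the wall for 20 kW of optical output, given
# the wall-plug efficiencies quoted in the tutorial.
optical_out_kw = 20.0
efficiency = {"CO2": 0.10, "Nd:YAG": 0.03, "Yb fiber": 0.30}

for laser, eta in efficiency.items():
    electrical_kw = optical_out_kw / eta
    waste_heat_kw = electrical_kw - optical_out_kw
    print(f"{laser:>8s}: {electrical_kw:6.0f} kW from the wall, "
          f"{waste_heat_kw:5.0f} kW of waste heat to manage")
```

The Nd:YAG system needs roughly ten times the electrical power of the fiber laser for the same output, and almost all of that difference has to be carried away as heat- a big part of why fiber is winning.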

Monday, May 2, 2011

Small, Mountain-Town Mecca for Optics Research


(Above: Big Sky Laser series compact Q-Switched, Nd:YAG from Quantel Laser. Author's note: may not be best to combine beautiful Montana stream with Nd:YAG)


I am likely showing my naivete as a "young" optics researcher, but after my Super Shuttle ride from the airport to my hotel last night, I felt compelled to say something about Bozeman, Montana. And no, I'm not being paid by Bozeman's Chamber of Commerce (though if you are paying attention, Bozeman, an all-expenses-paid visit could convince me to write more about your town, whose combination of optics innovation and gorgeous backdrop makes me want to pack up my bags and head out West to Big Sky Country).

What piqued my interest about Bozeman were two passengers in my Blue Van who were employed by Bozeman optics companies. I knew ILX Lightwave was out of Bozeman, but didn't realize that was just the beginning. One of the passengers was an HR rep from Quantel. Note to job seekers: Quantel-Medical, which makes laser systems for ophthalmology and dermatology, is looking to fill several positions. Find out more information at the CLEO Job Fair.

My surface-level internet searching led me to take a stab at a list of photonics companies in Bozeman (an adjacent booth number lets you know they will be at the expo, which opens at 9:45 am tomorrow):

AdvR (booth 1330)
Altos Photonics (booth 1225)
Bridger Photonics
ILX Lightwave (booth 1808)
Lattice Materials
Resonon
S2
Scientific Materials Corp
Quantel (booth 1903)
Quantum Composers

This list, which is by no means complete, is still very impressive given that the population of Bozeman is just under 40 thousand.

So why all the optics in Bozeman? A 2005 article from the Bozeman Daily Chronicle gives credit to Montana State University professors Pat Callis and Rufus Cone for strengthening the optics program in the late 1980s, which led to the establishment of a handful of companies in 1990, such as ILX Lightwave, Big Sky Laser (now Quantel), and Lattice Materials. To further the growth of optics at MSU and collaboration with industry, OpTec was created in 1995 as a multidisciplinary center for optics research. In 1999, Spectrum Lab was formed specifically to transition photonics research from MSU to Montana companies. Bozeman companies have also been no strangers to Small Business Innovation Research (SBIR) grants, which bolster the development of optics start-ups.

Be sure to stop by the Bozeman contingent at the expo, if not to talk optics and photonics, at least to hear adventures of fly-fishing, downhill skiing, rugged hiking, and glacier climbing!