Digging Into the New QD-OLED TVs - IEEE Spectrum

Formerly rival technologies have come together in Samsung displays

Sony's A95K televisions incorporate Samsung's new QD-OLED display technology.

The Samsung, Sony, and Alienware products all use display panels manufactured by Samsung but have their own unique display assembly, operating system, and electronics.

I took apart a 55-inch Samsung S95B to learn just how these new displays are put together (destroying it in the process). Inside, I found an extremely thin OLED backplane that generates blue light, paired with an equally thin QD color-converting structure that completes the optical stack. Using a UV light source, a microscope, and a spectrometer, I learned a lot about how these displays work.

Samsung used a unique pixel pattern in its new QD-OLED displays. Peter Palomaki

A few surprises:

The pixel layout is unique. Instead of being evenly arrayed, the green quantum dots form their own line, separate from the blue and red [see photo, above]. (The blue pixels draw their light directly from the OLED panel; the red and green pixels are lit by quantum dots.)

The bandwidth of the native QD emission is so narrow (resulting in a very wide color gamut, that is, the range of colors that can be produced, generally a good thing) that some content doesn't know how to handle it. So the TV "compresses" the gamut in some cases by adding off-primary colors to bring its primary color points in line with more common gamuts. This is especially dramatic with green, where "pure" green actually contains a significant amount of added red and a small amount of added blue (a rough numerical sketch of this kind of mixing appears after this list).

While taking this thing apart was no easy task, and deconstruction cracked the screen, I was surprised at how easily the QD frontplane and the OLED backplane could be separated. It was easier than splitting an Oreo in half. [See video, below.]
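
To make that gamut compression concrete, here is a minimal numerical sketch of how mixing a little red and blue light into a very saturated green pulls its color point toward a more common gamut. The primary coordinates are the standard Rec. 2020 and Rec. 709 values, but the mixing weights are illustrative assumptions, not measurements from the S95B.

```python
# Sketch of gamut "compression": pulling a very saturated native green primary
# toward a more common (Rec. 709) green by mixing in a little red and blue light.
# Mixing weights below are illustrative assumptions, not measured values.

def xy_to_XYZ(x, y, Y=1.0):
    """Convert CIE xyY to XYZ (additive light mixes linearly in XYZ)."""
    X = x * Y / y
    Z = (1 - x - y) * Y / y
    return X, Y, Z

def XYZ_to_xy(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s

def mix(primaries, weights):
    """Additively mix (x, y) primaries with the given luminance weights."""
    X = Y = Z = 0.0
    for (x, y), w in zip(primaries, weights):
        Xi, Yi, Zi = xy_to_XYZ(x, y, w)
        X, Y, Z = X + Xi, Y + Yi, Z + Zi
    return XYZ_to_xy(X, Y, Z)

# Assumed native QD primaries (very saturated, near the Rec. 2020 color points).
native_red, native_green, native_blue = (0.708, 0.292), (0.170, 0.797), (0.131, 0.046)

# "Pure" green rendered mostly with green, plus some red and a trace of blue.
x, y = mix([native_green, native_red, native_blue], [0.880, 0.115, 0.005])
print(f"mixed 'green' lands near x={x:.3f}, y={y:.3f}; Rec. 709 green is (0.300, 0.600)")
```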

As for the name of this technology, Samsung has used the branding OLED, QD Display, and QD-OLED, while Sony is just using OLED. Alienware uses QD-OLED to describe the new tech (as do most in the display industry).

—Peter Palomaki

Story from January 2022 follows:

For more than a decade now, OLED (organic light-emitting diode) displays have set the bar for screen quality, albeit at a price. That’s because they produce deep blacks, offer wide viewing angles, and have a broad color range. Meanwhile, QD (quantum dot) technologies have done a lot to improve the color purity and brightness of the more wallet-friendly LCD TVs.

In 2022, these two rival technologies will merge. The name of the resulting hybrid is still evolving, but QD-OLED seems to make sense, so I’ll use it here, although Samsung has begun to call its version of the technology QD Display.

To understand why this combination is so appealing, you have to know the basic principles behind each of these approaches to displaying a moving image.

In an LCD TV, the LED backlight, or at least a big section of it, is on all at once. The picture is created by filtering this light at the many individual pixels. Unfortunately, that filtering process isn’t perfect, and in areas that should appear black some light gets through.
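
A quick back-of-the-envelope illustration of why that leakage matters; the luminance and leakage figures below are assumptions chosen for easy arithmetic, not measurements of any particular set.

```python
# Back-of-the-envelope contrast estimate for an LCD, using assumed numbers.
white_nits = 500.0        # assumed full-white luminance of the panel
leakage_fraction = 0.001  # assume 0.1% of the backlight escapes a "black" pixel

black_nits = white_nits * leakage_fraction
contrast_ratio = white_nits / black_nits   # equals 1 / leakage_fraction
print(f"black level: {black_nits:.2f} nits, contrast ratio ~{contrast_ratio:,.0f}:1")
# An OLED pixel that is switched off emits essentially nothing,
# so its contrast is limited by the room, not the panel.
```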

In OLED displays, the red, green, and blue diodes that comprise each pixel emit light and are turned on only when they are needed. So black pixels appear truly black, while bright pixels can be run at full power, allowing unsurpassed levels of contrast.

But there’s a drawback. The colored diodes in an OLED TV degrade over time, causing what’s called “burn-in.” And with these changes happening at different rates for the red, green, and blue diodes, the degradation affects the overall ability of a display to reproduce colors accurately as it ages and also causes “ghost” images to appear where static content is frequently displayed.

Adding QDs into the mix shifts this equation. Quantum dots—nanoparticles of semiconductor material—absorb photons and then use that energy to emit light of a different wavelength. In a QD-OLED display, all the diodes emit blue light. To get red and green, the appropriate diodes are covered with red or green QDs. The result is a paper-thin display with a broad range of colors that remain accurate over time. These screens also have excellent black levels, wide viewing angles, and improved power efficiency over both OLED and LCD displays.
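
A short worked example of that down-conversion, using typical (assumed) emission wavelengths rather than the actual peaks of Samsung's emitters. Because a quantum dot cannot emit more energy per photon than it absorbs, blue is the natural pump color, and the per-photon energy difference is lost as heat.

```python
# Photon-energy arithmetic for QD down-conversion (wavelengths are assumed, typical values).
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def photon_energy_eV(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9) / eV

blue, green, red = 450, 530, 630  # nm, assumed emission peaks
for name, wl in [("blue", blue), ("green", green), ("red", red)]:
    print(f"{name} ({wl} nm): {photon_energy_eV(wl):.2f} eV per photon")

# Even a perfect QD loses the energy difference per converted photon as heat:
print(f"max energy efficiency, blue->red: {photon_energy_eV(red)/photon_energy_eV(blue):.0%}")
```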

Samsung is the driving force behind the technology, having sunk billions into retrofitting an LCD fab in Tangjeong, South Korea, for making QD-OLED displays. While other companies have published articles and demonstrated similar approaches, only Samsung has committed to manufacturing these displays, which makes sense because it holds all of the required technology in house. Having both the OLED fab and QD expertise under one roof gives Samsung a big leg up on other QD-display manufacturers.

Samsung first announced QD-OLED plans in 2019, then pushed out the release date a few times. It now seems likely that we will see public demos in early 2022 followed by commercial products later in the year, once the company has geared up for high-volume production. At this point, Samsung can produce a maximum of 30,000 QD-OLED panels a month; these will be used in its own products. In the grand scheme of things, that’s not that much.

Unfortunately, as with any new display technology, there are challenges associated with development and commercialization.

For one, patterning the quantum-dot layers and protecting them is complicated. Unlike QD-enabled LCD displays (commonly referred to as QLED) where red and green QDs are dispersed uniformly in a polymer film, QD-OLED requires the QD layers to be patterned and aligned with the OLEDs behind them. And that’s tricky to do. Samsung is expected to employ inkjet printing, an approach that reduces the waste of QD material.

Another issue is the leakage of blue light through the red and green QD layers. Leakage of only a few percent would have a significant effect on the viewing experience, resulting in washed-out colors. If the red and green QD layers don’t do a good job absorbing all of the blue light impinging on them, an additional blue-blocking layer would be required on top, adding to the cost and complexity.
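
To put "a few percent" in context, a simple Beer-Lambert (optical density) sketch shows how strongly the QD layers must absorb at the blue pump wavelength; the leakage targets below are assumptions for illustration.

```python
# Optical density (OD) a QD layer needs to hold blue leakage below a target level.
# Transmission T = 10**(-OD). Target leakage values are assumed for illustration.
import math

for target_leak in (0.05, 0.01, 0.001):          # 5%, 1%, 0.1% blue transmission
    od_needed = -math.log10(target_leak)
    print(f"leakage <= {target_leak:.1%} requires OD >= {od_needed:.1f} at the pump wavelength")
```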

Another challenge is that blue OLEDs degrade faster than red or green ones do. With all three colors relying on blue OLEDs in a QD-OLED design, this degradation isn’t expected to cause as severe color shifts as with traditional OLED displays, but it does decrease brightness over the life of the display.

Today, OLED TVs are typically the most expensive option on retail shelves. And while the process for making QD-OLED simplifies the OLED layer somewhat (because you need only blue diodes), it does not make the display any less expensive. In fact, due to the large number of quantum dots used, the patterning steps, and the special filtering required, QD-OLED displays are likely to be more expensive than traditional OLED ones—and way more expensive than LCD TVs with quantum-dot color purification. Early adopters may pay about US $5,000 for the first QD-OLED displays when they begin selling later this year. Those buyers will no doubt complain about the prices—while enjoying a viewing experience far better than anything they’ve had before.

Peter Palomaki is the owner and chief scientist of Palomaki Consulting, where he helps companies understand and implement quantum dot technology.

The future heart of the Vera C. Rubin Observatory will soon make its way to Chile

The LSST camera, eventually bound for the Vera C. Rubin Observatory in Chile, sits on its stand in a Bay Area clean room.

The world’s largest camera sits within a nondescript industrial building in the hills above San Francisco Bay.

If all goes well, this camera will one day fit into the heart of the future Vera C. Rubin Observatory in Chile. For the last seven years, engineers have been crafting the camera in a clean room at the SLAC National Accelerator Laboratory in Menlo Park, Calif. In May 2023, if all goes according to plan, the camera will finally fly to its destination, itself currently under construction in the desert highlands of northern Chile.

Building a camera as complex as this requires a good deal of patience, testing, and careful engineering. The road to that flight has been long, and there’s still some way to go before the end is in sight.

“We’re at the stage where we’ve got all the camera’s mechanisms fully assembled,” says Hannah Pollek, a staff engineer at SLAC.

Any typical camera needs a lens, and this camera is certainly no exception. At 1.57 meters (5 feet) across, this lens is the world's largest, as recognized by the Guinness Book of World Records. When it's installed, it will catch light reflected by a trio of mirrors, built separately.

In action, the telescope will point at a parcel of sky, 3.5 degrees across—in other words, seven times the width of the full moon. The camera will take two exposures, back-to-back, approximately 15 seconds each—bracketed by the sweeping of a colossal shutter. Then, the telescope will move along to the next parcel, and so forth, in a mission to survey the southern sky for years on end.
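
Some rough survey arithmetic follows from those numbers. The field-of-view area comes from the 3.5-degree width above; the per-pointing overhead and the total sky area are assumptions for illustration.

```python
# Rough estimate of how quickly the telescope can tile the sky.
import math

fov_diameter_deg = 3.5
fov_area = math.pi * (fov_diameter_deg / 2) ** 2        # ~9.6 square degrees per pointing
exposure_s = 2 * 15                                     # two back-to-back 15 s exposures
overhead_s = 9                                          # assumed readout + slew time per pointing
survey_area = 18_000                                    # assumed square degrees of accessible sky

pointings = survey_area / fov_area
hours = pointings * (exposure_s + overhead_s) / 3600
print(f"~{pointings:,.0f} pointings, ~{hours:.0f} hours of exposure-plus-slew time")
# i.e. the accessible sky can be swept in a handful of observing nights.
```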

Behind the lens sit the detectors, which are fashioned from charge-coupled device (CCD) sensors, common in astronomy. With the lens cap removed, the detectors are visible as a silver-and-blue grid, the different colors being a consequence of the camera having two different suppliers. Together, they can construct images that are as large as 3.2 gigapixels.
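
That pixel count implies a hefty data rate. The sketch below assumes 16-bit raw samples and a round number of pointings per night; both are assumptions, not the camera's actual encoding or cadence.

```python
# Raw data volume per exposure and per night, under assumed encoding and cadence.
pixels = 3.2e9                 # 3.2 gigapixels per image
bytes_per_pixel = 2            # assumed 16-bit raw samples
images_per_night = 2 * 1000    # assumed ~1,000 pointings per night, two exposures each

per_image_gb = pixels * bytes_per_pixel / 1e9
per_night_tb = per_image_gb * images_per_night / 1e3
print(f"~{per_image_gb:.1f} GB per exposure, ~{per_night_tb:.0f} TB of raw pixels per night")
```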

The camera's detectors, the silver and blue squares, are seen through its uncapped 1.5-meter-wide lens. Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory

To do that repeatedly, those detectors need to be chilled. That's why there's a large bundle of tubing behind the camera. Some of it carries data or power, but most of it is plumbing for the refrigeration, which helps a cryostat cool the detectors to around -100 °C. Those temperatures eliminate much of the noise that the CCDs would otherwise pick up.
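
A rough way to see why such deep cooling helps: CCD dark current falls roughly exponentially with temperature, often quoted as halving every several degrees Celsius. Taking that rule of thumb at face value (it varies from device to device, so treat it purely as an assumption):

```python
# Rule-of-thumb dark-current reduction from cooling (device-dependent; illustrative only).
doubling_interval_c = 6.0            # assume dark current halves every ~6 degrees C
t_warm, t_cold = 20.0, -100.0        # room temperature vs. the camera's operating point

reduction = 2 ** ((t_warm - t_cold) / doubling_interval_c)
print(f"dark current reduced by roughly a factor of {reduction:,.0f}")
```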

The detectors aren't the only part of the camera that needs to be kept cold. The camera's back-end electronics generate some 1,100 watts of heat, which is carried away by pumped liquid coolant. This loop doesn't need a cryostat, but it has given the camera's engineers headaches. Recently, they've had to swap out the fluid they use, necessitating a complete rework of the plumbing. The engineers are still tinkering with the new system.
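
As a sanity check on that plumbing, the required coolant flow follows from Q = m_dot * c_p * dT. The specific heat and allowed temperature rise below are assumed values, not properties of the fluid SLAC actually uses.

```python
# Coolant mass flow needed to carry away the electronics' heat: Q = m_dot * c_p * dT.
heat_load_w = 1100.0        # back-end electronics heat, from the article
c_p = 2000.0                # assumed specific heat of the coolant, J/(kg*K)
delta_t = 5.0               # assumed allowed coolant temperature rise, K

m_dot = heat_load_w / (c_p * delta_t)
print(f"~{m_dot:.2f} kg/s (~{m_dot * 60:.1f} kg per minute) of coolant flow")
```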

There are a few components that remain to be installed. For astronomers, key to the camera’s operation are the filters that will fit over the lens. There are six of them, each coated to only let through specific wavelengths of light (such as ultraviolet or near-infrared). Built in Massachusetts and Provence, France, and shipped to California, they now sit on the floor of the camera’s clean room.

When they’re installed, five of them will sit in a carousel about the camera—the sixth resting in storage, waiting for its turn to be swapped in. In practice, the mechanism takes about 2 minutes to slot a filter in between the lens and the detectors. The filters are delicate glass, so engineers have been testing the system with dummy metal disks of the same weight.

After the filters are installed, along with a few final body panels, engineers will swing the camera down to point it at the floor. They’ll test its performance in a darkened environment by shuffling around light sources.

If building and testing the camera is one saga, then actually getting it to its final destination is an entirely different ordeal.

English-language technical instructions have to be rewritten in Spanish for the benefit of local Chilean technicians. The lenses and other glass parts will have to be removed. The camera will have to be mounted within a shipping container and clamped inside special frames, specifically designed to isolate vibrations and keep the camera stable in forces up to 2 gs.

Even that system has been tested in a mockup of its special flight—a chartered Boeing 747 cargo plane from San Francisco to Santiago, a direct flight that typically doesn’t exist.

“We really want to avoid extra trucking in the U.S.,” says Margaux Lopez, a staff engineer at SLAC. “It just makes more sense to put our camera on a chartered plane with all of the rest of the stuff in the clean room. We have an incredible amount of support equipment that also needs to go down.”

If all goes well with the last phase of construction, this camera will soon depart California for Chile and catch its first glimpse of the night sky by 2024.

New nonprofit Basis hopes to model human reasoning to inform science and public policy

Matthew Hutson is a freelance writer who covers science and technology, with specialties in psychology and AI. He’s written for Science, Nature, Wired, The Atlantic, The New Yorker, and The Wall Street Journal. He’s a former editor at Psychology Today and is the author of The 7 Laws of Magical Thinking. Follow him on Twitter at @SilverJacket.

The field of artificial intelligence has embraced deep learning—in which algorithms find patterns in big data sets—after moving on from earlier systems that more explicitly modeled human reasoning. But deep learning has its flaws: AI models often show a lack of common sense, for example. A new nonprofit, Basis, hopes to build software tools that advance the earlier method of modeling human reasoning, and then apply that method toward pressing problems in scientific discovery and public policy.

To date, Basis has received a government grant and a donation of a few million dollars. Advisors include Rui Costa, a neuroscientist who heads the Allen Institute in Seattle, and Anthony Philippakis, the chief data officer of the Broad Institute in Cambridge, Mass. In July, over tacos at the International Conference on Machine Learning, I spoke with Zenna Tavares, a Basis cofounder, and Sam Witty, a Basis research scientist, about human intelligence, problems with academia, and trash collection. The following transcript has been edited for brevity and clarity.

How did Basis get started?

Zenna Tavares: I graduated from MIT in early 2020, just before the pandemic. My research had been around probabilistic inference and causal reasoning. I made pretty complicated simulation models. For example, if you’re driving a car, and you crash, would you have crashed had you been driving slower? I built some tools for automating that kind of reasoning. But it’s hard work to do in a conventional academic environment. It requires more than one graduate student working on it at a time. So how can we build an organization focused on this somewhat non-mainstream approach to AI research? Also, being a little bit burned out by my Ph.D., I was thinking it would be great if we could apply this to the real world.
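
Tavares's car-crash example maps onto the standard three-step counterfactual recipe from structural causal models: abduction (infer the unobserved circumstances), intervention (set the speed lower), and prediction (rerun the model). The toy model below is a generic illustration of that recipe, not Basis's software.

```python
# A toy structural causal model for "would I have crashed had I been driving slower?"
# Exogenous condition: how far ahead the obstacle appeared. Endogenous: stopping distance, crash.

def stopping_distance_m(speed_mps, reaction_s=1.0, decel_mps2=7.0):
    # simple kinematics: reaction-time travel plus braking distance v^2 / (2a)
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

def crash(speed_mps, obstacle_distance_m):
    return stopping_distance_m(speed_mps) > obstacle_distance_m

# Observed world: driving at 25 m/s, a crash happened.
observed_speed = 25.0

# Step 1 (abduction): infer circumstances consistent with the observed crash,
# here simply "the obstacle was closer than the stopping distance at 25 m/s".
obstacle_distance = 0.9 * stopping_distance_m(observed_speed)
assert crash(observed_speed, obstacle_distance)

# Steps 2-3 (intervention + prediction): keep the inferred world fixed, set speed lower.
counterfactual_speed = 15.0
print("crash at 15 m/s, same obstacle?", crash(counterfactual_speed, obstacle_distance))
```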

What makes your approach non-mainstream?

Tavares: The mainstream right now in AI research is deep machine learning, where you get a lot of data and train a big model to try to learn patterns. Whether it be GPT-3, or DALL-E, a lot of these models are based on trying to emulate human performance by matching human data. Our approach is different in that we’re trying to understand some basic principles of reasoning. Humans build mental models of the world, and we use those models to make inferences about how the world works. And by inferences, I mean predictions into the future, or counterfactuals—how would the world have been had things been different? We work a lot with representations, like simulation-based models, that allow you to express very complicated things. Can we build really sophisticated models, both for commonsense reasoning but also for science?

Sam Witty: The application areas that we’re particularly interested in, and I think have been underserved by a lot of the existing machine-learning literature, rely on a lot of human knowledge. And often, scientists have a lot of knowledge that they could bring to bear on a problem. One main technical theme of our work is going to be about hybridizing, getting the best of classical approaches to AI based on reasoning, and modern machine-learning techniques, where scientists and policymakers can communicate partial knowledge about the world and then fill in the gaps with machine learning.
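
One minimal sketch of "communicate partial knowledge and fill in the gaps": the modeler supplies the functional form of a mechanism, and data determine its unknown constant. Again, this is a generic illustration, not Basis's tooling.

```python
# Hybrid modeling sketch: a known functional form (inverse-square decay) with an
# unknown scale constant k estimated from noisy observations.
import random

def mechanism(distance, k):
    return k / distance**2          # structure supplied by domain knowledge

# Simulated noisy measurements (stand-in for real data).
random.seed(0)
true_k = 50.0
data = [(d, mechanism(d, true_k) + random.gauss(0, 0.1)) for d in range(1, 20)]

# Least-squares estimate of k: minimize sum (y - k/d^2)^2  =>  k = sum(y/d^2) / sum(1/d^4)
num = sum(y / d**2 for d, y in data)
den = sum(1 / d**4 for d, y in data)
print(f"estimated k = {num / den:.1f} (true value {true_k})")
```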

Why aren’t causal methods used more often?

Tavares: On the one hand, it’s just a really hard technical problem. And then two, a lot of advances in deep learning come because large companies have invested in that particular technology. You can now just download a software package and build a neural network.

Witty: I think a part of it is the kinds of problems we’re trying to solve. Think of the application areas that large tech companies have focused on. They benefit from vast amounts of data and don’t rely on human knowledge as much. You can just gather millions and millions of images and train a computer-vision model. It’s not as obvious how to do that with scientific discovery or policymaking.

You’re applying machine learning to policymaking?

Tavares: That’s an area we’re pursuing. How do you model a city? We’re starting to talk to agencies in New York City. How can we improve the trash problem? How can we reduce homelessness? If we instantiate this policy, what’s going to happen? And the inverse problem: If we want to reduce trash and reduce homelessness, what policies should we instantiate? How should we allocate resources? Could we build multiscale models, which capture different components of the city, in some coherent and cohesive way? And also make it accessible so you can actually help policymakers answer some concrete questions?

Will you be working with the city to answer specific questions about trash pickup, or developing new tools that anyone can use to work on these kinds of problems?

Tavares: We’re starting with particular questions, but to answer those we will require a more general set of capabilities. Can we build a model of a few blocks of New York that are at a level of scale that’s not been done before? That model could then be used to ask a variety of different questions. But just to make sure we’re grounded, we do want to have a particular set of questions.

Witty: One thing that’s especially important is that we want to involve experts and stakeholders, to encode their knowledge, their preferences, their goals.

Tavares: Which is itself quite a hard problem. There’s no massive data set of people’s commonsense knowledge about the urban environment. We’re excited because I think there is a real opportunity to do these two things in tandem—build this foundation of inference but also have an effect immediately.

Witty: Yeah, we’re certainly looking to communicate with the research world. And organizationally, we’re planning on having people work with Basis who are not Basis staff, and often they will be academic researchers with incentives to publish and further their academic careers. One thing I will say is that personally, during my Ph.D., I would often scope projects with the paper as the end goal, and I’m planning on shifting that mind-set to focusing on the work and then afterwards using a paper as a means of communication. But yeah, we don’t want to be hermits in the woods for 20 years, and then come out with this big technology that’s now outdated and totally disconnected from the rest of the world.

Tavares: We are open-source-software-focused, as opposed to the primary output being papers. And within the software focus, we want a unified body of software. We’re trying to build a platform, as opposed to a bunch of different projects.

Could you say more about the organizational benefits of being a nonprofit?

Tavares: As a student, your goal is to publish papers and graduate. And that’s only weakly aligned with doing impactful research. We’re working as a team, and our goals are aligned with what we want to do. We’re not unique in that. Look at the papers coming out of DeepMind. They have like 30 authors. I think academia is great for many things, including exploring new ideas. But it is harder, at least in my experience, to build robust technology. It’s not rewarded.

Witty: That’s nonprofit versus academia. From the other side, certainly large tech companies can collaborate in large teams and develop shared infrastructure. But there, there are incentives that maybe get in the way of the work that we want to do as well. The fact that we’re not beholden to make a profit is really freeing.

Will products or services bring income in addition to grants and donations?

Tavares: Hopefully, if we’re successful building what we plan to build, there will be many different domains in which we could. It’s a little bit of a weird new organization. Many things are not certain, and I don’t want to convey things more set in stone or figured out than they are.

Learn how to measure and reduce common mode electromagnetic interference (EMI) in electric drive installations

Nowadays, electric machines are often driven by power electronic converters. Even though the use of converters brings with it a variety of advantages, common mode (CM) signals are a frequent problem in many installations. Common mode voltages induced by the converter drive common mode currents that damage the motor bearings over time and significantly reduce the lifetime of the drive.

Hence, it’s essential to measure these common mode quantities in order to take suitable countermeasures. Handheld oscilloscopes in combination with Rogowski probes offer a simple and reliable way to accurately determine the required quantities and the effectiveness of different countermeasures.
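
For reference, the common-mode quantities in a three-phase drive are simple combinations of the per-phase signals: the common-mode voltage is the average of the three phase voltages, and the common-mode current is the sum of the three phase currents, which is what a single Rogowski coil closed around all three motor conductors measures. A minimal sketch with made-up sample values:

```python
# Common-mode voltage and current from sampled three-phase waveforms (illustrative samples).
def common_mode_voltage(v_u, v_v, v_w):
    return (v_u + v_v + v_w) / 3.0

def common_mode_current(i_u, i_v, i_w):
    # a Rogowski coil around all three phase conductors measures this sum directly
    return i_u + i_v + i_w

# One set of made-up instantaneous samples from a PWM inverter output:
v_samples = (560.0, -560.0, -560.0)   # volts: phase U switched high, V and W low
i_samples = (0.8, -0.5, -0.2)         # amps: small imbalance returning via ground paths

print(f"v_cm = {common_mode_voltage(*v_samples):.1f} V")
print(f"i_cm = {common_mode_current(*i_samples):.2f} A")
```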