The Key Technology Behind Varjo’s High-res ‘Bionic Display’ Headset

Varjo is a relatively new name on the VR scene, but the company is certainly generating buzz, having quickly raised some $15 million in venture capital on the promise of a VR headset that achieves retina resolution at the center of the field of view. But how exactly does it work? A new graphic shows the key tech behind the company’s headset.

Today there are primarily two choices when it comes to the kind of displays to use in VR headsets. The first is a traditional display like the kind you find in your smartphone. The problem with traditional displays today is that their pixels aren’t yet small enough to be truly invisible. Then there are microdisplays, which have incredible pixel density, but which can’t yet be made large enough to support a very wide field of view in a VR headset.

So until either traditional displays can shrink their pixels drastically, or microdisplays can be easily made much larger, we’re still quite a way from achieving ‘retina resolution’—having such tiny pixels that they’re invisible to the naked eye—in highly immersive VR headsets.
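
For a sense of the gap, here’s some napkin math in Python (the ~60 pixels-per-degree acuity figure is a common rule of thumb, and the headset numbers are round illustrative figures, not any specific product’s specs):

```python
# Back-of-the-envelope check of the 'retina resolution' problem described
# above. ~60 pixels per degree (ppd) is the commonly cited acuity limit.
# All figures here are illustrative, not any vendor's specs.

RETINA_PPD = 60  # pixels per degree at the acuity limit

def pixels_needed(fov_degrees: float, ppd: float = RETINA_PPD) -> int:
    """Horizontal pixel count needed to hit `ppd` across `fov_degrees`."""
    return int(fov_degrees * ppd)

# A Rift/Vive-class headset: 1,080 horizontal pixels over ~100 degrees.
print(1080 / 100)          # ~11 ppd -- far below retina resolution
print(pixels_needed(100))  # 6000 px needed for retina res at 100 degrees
```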

But Varjo hopes to deliver a stopgap which combines the advantages of traditional displays (wide field of view) with those of microdisplays (high pixel density), to deliver a VR headset with retina resolution (at least in a small portion of the overall field of view).

Combining Displays

The underlying concept is illuminated in a recent animated graphic from the company’s site:

Above you can see (right to left) a diagram of the viewer’s eye, a traditional lens, a moving refraction optic (above), a microdisplay (below), and a traditional display.

As you can see, the refraction optic can move the reflected microdisplay image onto the corresponding section of the traditional display. The idea is that the reflected high-resolution image will always be positioned at the very center of the user’s gaze, with the help of precision eye tracking, while the lower resolution traditional display will fill out the peripheral view where your eye can’t see nearly as much detail. This is very similar to software foveated rendering, except in this case it’s almost like moving the pixels themselves to where they are needed, instead of just rendering in higher quality in a specific area.
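
To make the idea concrete, here’s a toy sketch of the steering logic such a system might use. To be clear, this is our own illustration built on basic mirror optics, not Varjo’s actual control scheme; the function name and tilt limit are invented for the example:

```python
# A minimal sketch of the gaze-following idea described above: tilt a
# combiner mirror so the reflected microdisplay image lands where the user
# is looking. The half-angle rule (a mirror deflects a beam by twice its
# own rotation) is basic optics; everything else here is hypothetical.

def combiner_tilt(gaze_deg: float, max_tilt_deg: float = 10.0) -> float:
    """Mirror tilt needed to steer the reflected image by `gaze_deg`."""
    tilt = gaze_deg / 2.0  # mirror rotation deflects the beam by 2x
    return max(-max_tilt_deg, min(max_tilt_deg, tilt))

# Eye tracker reports the user looking 8 degrees right, 3 degrees up:
print(combiner_tilt(8.0), combiner_tilt(3.0))  # -> 4.0 1.5 (degrees, per axis)
```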

This example image from Varjo shows the difference in visual quality between the image from the traditional display and the microdisplay (click to zoom):

Image courtesy Varjo

Moving the High-res Region

The big questions, of course: how do you move the refraction optic quickly and reliably enough to keep up with the movements of the eye, and in a package compact enough for a reasonably sized VR headset?

Image courtesy Varjo

For the former, the answer may lie somewhere in the company’s key patent, Display Apparatus and Method of Displaying Using Focus and Context Displays, which describes “actuators” that could be involved in the various moving parts. Additional hints pertaining to how the company hopes to achieve this are likely found in Varjo’s job listing for a “Miniature Mechatronics Expert:”

You will be responsible of designing the actuators and motor controls for our mixed reality device. […] You will participate the development of leading edge motor technologies and design novel actuator mechanics to harness the power of custom designed optics, motors and electronics to reach new fronts in miniature mechatronics.

[…]

Responsibilities

  • Create motor position control algorithms
  • Design position encoder system
  • Set performance targets and requirements for motor units
  • […]

Plans ‘A’ Through ‘H’

The patent, which refers to the traditional display as the “context display” and the microdisplay as the “focus display,” actually covers a wide range of possible incarnations of the tech using varying methods for combining the microdisplay image with the traditional display image, including the use of waveguides, additional prisms, and other display technology entirely, like projection.

Image courtesy Varjo

Correcting Artifacts

Another big question: what artifacts will the optical combination process introduce to both the reflected microdisplay image and the image from the traditional display?

The patent also touches on that (a clear indicator that artifacts will indeed need to be contended with). It suggests a number of techniques which could help to eliminate distortions: masking the region of the traditional display that’s directly behind the reflected image, dimming the seams between the two display regions to create a more even transition from one to the other, and even intelligently fitting the transition seams to portions of the rendered scene, connecting them like puzzle pieces to make the seams less pronounced.
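
To illustrate the seam-dimming idea, here’s a minimal sketch of a cross-fade between the two display regions. The band width and linear ramp are arbitrary choices for illustration; the patent doesn’t specify an exact blending function:

```python
# Illustrative sketch of seam dimming: cross-fade between the focus
# (microdisplay) and context (traditional display) images over a border
# band so the transition is gradual rather than a hard edge.

import numpy as np

def blend_weight(dist_from_edge_px: np.ndarray, band_px: int = 32) -> np.ndarray:
    """0 outside the focus region, ramping linearly to 1 well inside it."""
    return np.clip(dist_from_edge_px / band_px, 0.0, 1.0)

# 1D slice across the seam: 16 px outside to 48 px inside the focus region
xs = np.arange(-16, 48)
w = blend_weight(xs)
# composite = w * focus_pixels + (1 - w) * context_pixels
print(w[:4], w[-4:])  # -> 0s outside the seam, ramping to 1s inside
```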

– – — – –

Image courtesy Varjo

Varjo has laid out a number of interesting methods for pulling off their “bionic display.” Of course, the devil is always in the details—we’ll be looking forward to our first chance to try their latest prototype and see what it really looks like in practice.



Correction: Apple, Valve, and LG Didn’t End Up Investing in OLED Microdisplay Maker eMagin

According to documents filed last month with the SEC, participation in a new stock issuance of some $10 million by OLED microdisplay maker eMagin was offered to Apple, Valve, and LG, among others, though those companies didn’t end up participating in the deal, eMagin says.

Update (2/12/18): An earlier version of this article stated that Apple, LG, and Valve had participated in eMagin’s new stock offering; according to a press release issued by eMagin today, the companies ultimately didn’t participate in the deal. While the company had filed documents with the SEC on the 25th of January listing the companies as “specified investors” in the deal, none of them took part by the time the deal closed on January 29th, according to eMagin. We’ve reached out to the company for additional information surrounding the deal and eMagin’s involvement with Apple, Valve, and LG.

According to a report from Bloomberg, “Emagin listed those companies in the filing because it had discussions with [the companies] at industry events,” the company reportedly told the publication.

Original Article (2/10/18): Founded in 1993, eMagin is a producer of OLED microdisplays which have seen deployments in the military, medical, industrial, and other sectors. With the rise of AR and VR in the consumer market, eMagin has recently marketed its display technology toward companies building consumer headsets.

The company’s flagship product in this space is a 2,048 × 2,048 OLED microdisplay with a ~70% fill factor, which the company claims will eliminate the ‘screen door effect’ seen on today’s consumer VR headsets.

Microdisplays are very pixel dense, but expensive to manufacture at larger sizes | Image courtesy eMagin

According to SEC filings submitted on January 22nd, eMagin prepared to sell some $10 million in newly issued stock, with Apple, Valve, and LG, among others, listed as the offering’s “specified investors.” According to eMagin, the deal was expected to close on or about January 29th.

We speculated recently that Apple’s latest VR patent for compact VR optics could be intended for use with a microdisplay; this investment could be another clue in favor of that hypothesis.

SEE ALSO
Apple's Latest VR Patent Describes a Compact VR Headset with Eye Tracking

As for Valve, the company’s chief, Gabe Newell, said back in 2017 that he expected VR display technology to make great strides in 2018 and 2019; a timeline which may have been guided by the company’s involvement with Emagin:

“We’re going to go from this weird position where VR right now is kind of low res, to being in a place where VR is higher res than just about anything else, with much higher refresh rates than you’re going to see on either desktops or phones. You’ll see the VR industry leapfrogging any other display technology. You’ll start to see that happening in 2018 and 2019 when you’ll be talking about incredibly high resolutions.”

LG is also known to be in the VR game. The company previously released a mobile microdisplay-based headset, the LG 360 VR, and is further working on a tethered desktop headset which it revealed last year but has kept under wraps since.

– – — – –

eMagin’s SEC filing actually gives us a pretty likely idea of exactly why these companies are involved: volume. In the document the company describes some of its recent business activities.

On the commercial front, we entered into strategic agreements with multiple Tier One consumer product companies for the design and development of microdisplays for consumer head mounted devices and, together with these companies, negotiated with mass production manufacturers for higher volume production capabilities.

That certainly makes it sound like Apple, Valve, LG, and perhaps others formed something of a coalition to create sufficient demand to help eMagin achieve large enough initial volume for mass production at reasonable prices. The company expects the displays to be available in large quantities beginning in mid-2018.

This zoomed comparison shows the difference in Samsung’s PenTile subpixel layout compared to the RGB stripe approach. | Image courtesy eMagin

eMagin’s latest 2K × 2K OLED microdisplay is said to use 9.3µm square pixels with an RGB stripe subpixel arrangement and, crucially for AR and VR use, also offers low persistence capabilities thanks to a high frame rate and global illumination. While the company’s website presently lists the display as being capable of 500 nits, the SEC documents indicate that a much brighter 5,300 nit version has been demonstrated, which the company says “surpasses” the needs of AR and VR headsets.
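
As a quick aside, that 9.3µm pixel pitch also tells you why microdisplays stay small; a little arithmetic shows the panel’s rough physical size:

```python
# What a 9.3 um pixel pitch implies for physical size -- the reason
# microdisplays stay small:
px = 2048
pitch_um = 9.3

side_mm = px * pitch_um / 1000
print(round(side_mm, 1))         # ~19.0 mm per side
print(round(side_mm / 25.4, 2))  # ~0.75 inches: tiny next to a ~3.5" VR panel
```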


Hands-on: Varjo ‘Bionic Display’ Headset is a Promising Shortcut to Retina Resolution

Today’s VR headsets may have relatively high resolutions on paper, but when the pixels are stretched across a wide field of view, the effective angular resolution is far lower than what you might expect from a typical 1080p TV or monitor. Unfortunately, a display capable of achieving both a wide field of view and retina resolution isn’t readily available yet. Until then, Finland-based VR startup Varjo is using a combination of macrodisplays and microdisplays to put high pixel density at the center of your view without giving up a wide field of view.

The Varjo headset makes use of what the company calls a ‘context display’ and a ‘focus display’. The context display is a large macrodisplay with a 1,080 × 1,200 resolution spread across a 100 degree field of view. Alone, it would look almost identical to the fidelity you’d expect from the Oculus Rift or HTC Vive. Varjo’s trick, however, is putting a microdisplay (the ‘focus display’) with a 1,920 × 1,080 resolution at the center of the headset’s field of view. Although the focus display isn’t tremendously higher resolution than the context display by pixel count, its pixels are packed into just 35 degrees horizontally, making it incredibly pixel dense.
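
A little arithmetic with those figures (horizontal axis only, and treating angular resolution as uniform across the lens, which real optics aren’t) shows just how stark the difference is:

```python
# Quick arithmetic using the numbers quoted above (horizontal axis only).

context_ppd = 1080 / 100  # context display: 1,080 px over ~100 degrees
focus_ppd   = 1920 / 35   # focus display: 1,920 px over ~35 degrees

print(round(context_ppd, 1))  # ~10.8 ppd -- first-gen headset territory
print(round(focus_ppd, 1))    # ~54.9 ppd -- approaching the ~60 ppd retina limit
```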

Photo by Road to VR

At MWC 2018 I got to check out the Varjo Alpha prototype headset. Inside I saw an extremely high quality image at the center of my field of view which had no noticeable screen door effect. Beyond that 35 degree rectangular area, the resolution drops to the same levels you’d expect from first-gen consumer VR headsets. At the boundary between the focus display and context display, there’s an imperfect transition between the high resolution area and the low resolution area, which looks like a blurry rectangular halo, but it was actually somewhat less jarring than I was expecting.

A rough approximation of how the focus display looks against the context display. Relative fields of view are not to scale. | Photo by Road to VR, based on images courtesy Varjo

The company is using an optical combiner, essentially a two-way mirror, to composite the two displays. Going forward, Varjo hopes to further smooth out the transition between the two displays, the company’s CMO, Jussi Mäkinen, told me, using a combination of both hardware and software refinements.

SEE ALSO
Understanding Pixel Density & Retinal Resolution, and Why It's Important for AR/VR Headsets

The difference in quality between the focus display and the context display is truly night and day. Not only does the focus display not show any noticeable screen door effect, the jump in angular resolution turns otherwise blurry smears into perfectly legible letters, as a virtual standard eye chart placed inside the demo experience made clear. Textures benefited immensely from the improved angular resolution, revealing detail that simply isn’t visible on the lower resolution context display.

In the video above, keep your eye on the lower lines of the eye chart to get an idea of the difference in resolution between the focus display and the context display. The resolution jump is less noticeable here since the camera isn’t capturing the headset’s full field of view. You can’t quite see the boundary artifacts in this video.

The Varjo Alpha prototype was tracked with SteamVR Tracking, which the company plans to continue using going forward. Since this was a handheld demo (no strap on the headset yet), I didn’t do a comprehensive test of the headtracking in its prototype form.

– – — – –

In its current state, even with a static focus display, the benefit of the extra resolution is apparent, and you can almost fool yourself into thinking the entire display is sharp, as long as you consciously try to keep your gaze pointed through the very center of the lens. Of course, in practice, your eyes are not always looking perfectly through the center of the lenses, so you won’t always be looking at that super sweet spot of the focus display. And you’ll have to contend with your eye crossing back and forth over the transition point (and back and forth between high res and low res).

Varjo’s long term hope is that they’ll be able to move the focus display in real-time, creating a sort of hardware foveated display, such that the focus display is always at the center of your gaze no matter where you’re looking. That would of course require excellent eye-tracking (and then some) but if they can pull it off, it would make the headset even more compelling because your eye would never have to cross the border between the displays (and always be in that super sweet spot), and your brain might even do a good job of ignoring the border if it’s always a fixed distance from the center of your view.

Achieving an active focus display isn’t likely to be an easy task, though Varjo has patented several potential approaches, mostly involving quickly pivoting the optical combiner about two axes, which we examined recently when we explored the company’s key technology.

Though Varjo CMO Jussi Mäkinen tells me that the company already has prototypes with an active focus display, it’s still up in the air whether or not Varjo’s first commercial product, which is planned for release later this year, will use a static or active focus display.

Photo by Road to VR

Mäkinen told me that Varjo is for now focusing exclusively on enterprise and commercial applications for the headset, which the company initially expects to price between $5,000 and $10,000. He said that the company is actively listening to feedback from its partners about what aspects of the headset are most critical for improvement.

SEE ALSO
Understanding the Difference Between 'Screen Door Effect', 'Mura', & 'Aliasing'

If Varjo can develop an effective and reliable mechanism for active foveation, their headset could be a great stopgap for making near retina quality VR headsets with a wide field of view before any single display (per-eye) is ready to deliver such an experience. But we know that both macrodisplay and microdisplay makers are working toward that goal, so where does that leave Varjo once a single display can do it all?

Mäkinen says that Varjo doesn’t just want to make a headset, they want to pioneer the productivity use-case of VR, using a mix of hardware and software. One example he gave was using a totally virtual workspace without the need for physical monitors, something that can really only happen once headset resolution is high enough. He said we can expect more from the company on the software side in the future.


Google to Reveal “World’s Highest Resolution OLED-on-glass display” for VR Headsets in May

Google announced at last year’s Society for Information Display (SID) Display Week that the company was actively working on a VR-optimized OLED panel capable of packing in pixels at a density never before seen outside of microdisplays – all at a supposedly ‘wide’ field of view (FOV). Now, it appears that the company is going to talk in detail about that high-resolution OLED at this year’s SID.

Update (1:20 PM ET): More details about Google’s pixel-packed VR display have come to light via the Display Week event schedule: LG engineers will be co-presenting at the session detailing the display with Google, likely indicating that LG was Google’s partner on the project.

“The world’s highest resolution (18 megapixel, 1443 ppi) OLED-on-glass display was developed. White OLED with color filter structure was used for high-density pixelization, and an n-type LTPS backplane was chosen for higher electron mobility compared to mobile phone displays. A custom high bandwidth driver IC was fabricated. Foveated driving logic for VR and AR applications was implemented.”

As first reported by OLED-Info, an advance copy of the event’s schedule shows that a talk featuring Google hardware engineer Carlin Vieri will take place on May 22nd.

The talk is titled simply “18 Mpixel 4.3-in. 1443-ppi 120-Hz OLED Display for Wide-Field-of-View High-Acuity Head-Mounted Displays.” Such a display, OLED-Info postulates, could land somewhere around a resolution of 5,500 × 3,000. If correct, it has a clear advantage over even the highest resolution panels seen in current headsets such as the Samsung Odyssey and the upcoming HTC Vive Pro, both of which pack dual 1,440 × 1,600, 3.5-inch AMOLEDs at 90Hz.

Google’s VP of AR/VR Clay Bavor stated last year that the company “partnered deeply with one of the leading OLED manufacturers in the world to create a VR-capable OLED display with 10x more pixels than any commercially available VR display today,” saying that the panel under development would reach 20 megapixels per display – a bit higher than the display featured in the talk listed above, but holding the same implications.

Incorporating such a high-resolution display into a VR headset, Bavor explained, will also create enormous performance challenges of its own; data rates could range between 50–100 Gb/sec with current rendering configurations. Foveated rendering combined with eye-tracking is largely believed to be the solution for knocking down the massive graphical requirements of such a display. Yes, Google is researching foveated rendering too.
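
Bavor’s figure checks out under naive assumptions; here’s the napkin math for raw, uncompressed transport of two eye buffers (the 24-bit color depth and lack of compression or foveation are our assumptions for illustration):

```python
# Rough sanity check of the 50-100 Gb/s figure: raw (uncompressed,
# non-foveated) transport of two ~18 Mpixel eye buffers at 120 Hz.

pixels_per_eye = 4800 * 3840  # the 18 Mpixel panel discussed above
bits_per_pixel = 24           # assumed 8-bit RGB
refresh_hz = 120

raw_gbps = pixels_per_eye * 2 * bits_per_pixel * refresh_hz / 1e9
print(round(raw_gbps))  # ~106 Gb/s -- right at the top of the quoted range
```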

We’re hoping to find out more come May.


INT Announces 2,228 PPI High Pixel Density AMOLED for VR Headsets

When we saw the news that JDI, the Japanese display conglomerate founded by Sony, Toshiba, and Hitachi, was developing a 1,001 PPI LCD display for VR headsets, it was clear that the race for ever-higher pixel densities was still alive and well. Now we learn that INT, a Taiwan-based display design firm, is developing a 2,228 PPI AMOLED specifically designed for VR headsets.

The company’s 2,228 PPI AMOLED, which is built on a glass substrate, was announced in a fairly sparse press release, so not much is known about it; INT hasn’t provided any specs beyond the display’s pixel density and the fact that it’s an on-glass AMOLED.

High pixel densities are necessary to reduce the screen door effect – the visible lines between pixels which, when magnified by VR lenses, become much more apparent. To boot, the company says the glass-based display “is much more economical and can be made in larger size, thus improve FOV significantly.”

Image courtesy INT

Because INT hasn’t detailed the size of the panel, it’s impossible to say where it fits on the spectrum of VR hardware. It could be a ~3 inch panel that would essentially replace standard displays like you find in the Oculus Rift or HTC Vive, or an incredibly small microdisplay destined to function in headsets such as Varjo’s ‘bionic display’, which uses two displays per eye—a standard resolution ‘context display’ and a much smaller, but higher ppi ‘focus display’ that is mirrored to the fovea region of the eye and synced via eye-tracking to essentially increase the perceived overall resolution.

Considering, however, that the company says the display can be produced in larger sizes, we’re hopeful that it can also be manufactured in a standard, more widely applicable display size.

For comparison, both the Vive and Rift use a pair of 1,080 × 1,200 displays at ~456 PPI. The current market leaders in pixel density are the Samsung Odyssey and HTC Vive Pro, both using the same Samsung-built panel at ~615 PPI. The new INT display would therefore have around 390% higher PPI than the Rift/Vive, and around 260% more than the Vive Pro/Odyssey—a staggering increase that would likely require foveated rendering, a technique that presents the scene at its highest resolution only where the user’s photoreceptor-dense fovea is pointed.
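
If you want to check those percentages yourself (percent increase is the ratio minus one):

```python
# Checking the percentage claims above (percent increase = ratio - 1):
print(round((2228 / 456 - 1) * 100))  # ~389% higher than Rift/Vive
print(round((2228 / 615 - 1) * 100))  # ~262% more than Vive Pro/Odyssey
```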

The image below (from JDI) demonstrates the dramatic increase in acuity that can be had from such high-pixel-dense displays.

Image courtesy JDI

Dubbing it UHPD (Ultra High Pixel Density), INT is slated to show off their display at the Poster Session of the upcoming SID Display Week, which takes place May 22 – 24 in Los Angeles, CA.

Both JDI and Google are also presenting high pixel density displays at SID Display Week, with Google showing its 1,443 PPI OLED-on-glass display there as well. We’ll certainly be reporting on whatever comes out of it, so check back then.


Google & LG Detail Next-Gen 1,443 PPI OLED VR Display, ‘especially ideal for standalone AR/VR’

Ahead of SID Display Week, researchers have published details on Google and LG’s 18 Mpixel 4.3-in 1,443-ppi 120Hz OLED display made for wide field of view VR headsets, which was teased back in March.

The design, the researchers claim in the paper, uses a white OLED with color filter structure for high density pixelization and an n‐type LTPS backplane for faster response time than mobile phone displays.

The researchers also developed a foveated pixel pipeline for the display which they say is “appropriate for virtual reality and augmented reality applications, especially mobile systems.” The researchers attached to the project are Google’s Carlin Vieri, Grace Lee, and Nikhil Balram, and LG’s Sang Hoon Jung, Joon Young Yang, Soo Young Yoon, and In Byeong Kang.

The paper maintains that the human visual system (HVS) has a FOV of approximately 160 degrees horizontal by 150 degrees vertical, and an acuity of 60 pixels per degree (ppd), which translates to a resolution requirement of 9,600 × 9,000 pixels per eye. At least in the context of VR, a display with those exact specs would match a human’s natural ability to see.

Take a look at the specs below for a comparison of the panel as built vs. the human visual system.

Human Visual System vs. Google/LG’s Display

| Specification          | Upper bound   | As built      |
|------------------------|---------------|---------------|
| Pixel count (h × v)    | 9,600 × 9,000 | 4,800 × 3,840 |
| Acuity (ppd)           | 60            | 40            |
| Pixels per inch (ppi)  | 2,183         | 1,443         |
| Pixel pitch (µm)       | 11.6          | 17.6          |
| FOV (°, h × v)         | 160 × 150     | 120 × 96      |

To reduce motion blur, the 120Hz OLED is said to support short persistence illumination of up to 1.65 ms.
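
That persistence figure lines up neatly with the ‘@ 20% duty’ brightness spec in the table further below; at 120Hz each frame lasts about 8.33 ms:

```python
# The 1.65 ms persistence figure as a duty cycle at 120 Hz:
frame_ms = 1000 / 120
persistence_ms = 1.65

print(round(frame_ms, 2))                      # 8.33 ms per frame
print(round(persistence_ms / frame_ms * 100))  # ~20% duty cycle
```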

To drive the display, the paper specifically mentions the need for foveated rendering, a technique that uses eye-tracking to position the highest resolution imagery directly at the eye’s most photoreceptor-dense region, the fovea. “Foveated rendering and transport are critical elements for implementation of standalone VR HMDs using this 4.3′′ OLED display,” the researchers say.

While it’s difficult to convey a display’s acuity through photos on a webpage viewed on a traditional monitor (which necessarily involves a modicum of image compression), the paper also includes a few pictures of the panel in action.

Without VR optic (bare panel) – Image courtesy Google & LG

The researchers also photographed the panel with a VR optic to magnify what you might see if it were implemented in a headset. No distortion correction was applied to the displayed image, although the quality is “very good with no visible screen door effect even when viewed through high quality optics in a wide FoV HMD,” the paper maintains.

With VR optic – Image courtesy Google & LG

Considering Google and LG say this specific panel configuration, which requires foveated rendering, is “especially ideal for mobile systems,” the future for mobile VR/AR may be very bright (and clear) indeed.

Both Japan-based JDI and Taiwan-based INT are presenting their respective VR display panels at SID Display Week: a 1,001 PPI LCD (JDI) and a 2,228 PPI AMOLED (INT).

We expect to hear more about Google and LG’s display at today’s SID Display Week talk, which takes place on May 22nd between 11:10 AM – 12:30 PM PT. We’ll update this article accordingly. In the meantime, check out the specs below:

Google & LG New VR Display Specs

| Attribute       | Value                               |
|-----------------|-------------------------------------|
| Size (diagonal) | 4.3″                                |
| Subpixel count  | 3,840 × 2 (either RG or BG) × 4,800 |
| Pixel pitch     | 17.6 µm (1,443 ppi)                 |
| Brightness      | 150 cd/m² @ 20% duty                |
| Contrast        | >15,000:1                           |
| Color depth     | 10 bits                             |
| Viewing angle   | 30° (H), 15° (V)                    |
| Refresh rate    | 120 Hz                              |


Here’s a Look Inside Both Samsung & JDI’s High Pixel Density VR Panels

SID Display Week played host to a number of big names in display technology showing off their respective high pixel density VR panels. UploadVR’s Ian Hamilton was on the scene, and managed to get a pretty good capture of both Samsung’s and JDI’s panels with his iPhone 8 camera.

Samsung showed off their 2.43-inch, 3,840 × 2,160 resolution (120Hz) panel through the lenses of a VR headset, although it appears from UploadVR‘s video that only static imagery was shown to SID Display Week attendees. The panel on display features 1,200 pixels per inch (PPI), and although not explicitly mentioned, is probably a derivation of their OLED VR panel first shown at Mobile World Congress last year.

Japan Display Inc. (JDI), a display conglomerate created by Sony, Toshiba, and Hitachi, debuted their high pixel density display at Display Week, showing off the 3.25-inch, 2,160 × 2,432 resolution TFT-LCD (120Hz). JDI’s panel features 1,001 PPI. As opposed to Samsung’s display, attendees were treated to a moving scene.

According to a recent tweet by Hamilton, the screen door effect (SDE) was still apparent on Samsung’s display, while JDI’s display had a marked reduction in SDE, to the point where he couldn’t see it at all.

These displays likely take a good amount of graphical rendering power to run at such high resolutions and refresh rates, so it makes sense that they were displayed in cases rather than in wearable VR headsets.

Foveated rendering is touted as a solution to the current ‘brute force’ method of rendering the full resolution of a scene across the majority of the display; it uses eye-tracking and a dedicated rendering pipeline to show the highest resolution image only at the photoreceptor-dense part of the eye, the fovea, making these sorts of panels viable even for mobile VR headsets.

As an added bit of info: both panels were shown using standard refractive lenses, like those found in Gear VR, and not the more common Fresnel lenses found in current generation mobile VR headsets such as Google Daydream, Oculus Go, and HTC Vive Focus—something likely done to avoid muddying the view with the visual artifacts associated with Fresnel lenses, such as ‘god rays’: faint streaks or glares of light that are most noticeable when looking at bright objects against a dark background.

Our friends over at UploadVR also have an interesting piece on Google And LG’s new 1,443 PPI VR display, also debuted at SID earlier this week.


LG Develops AI-based Tech to Reduce Latency & Motion Blur in VR Displays

South Korean tech giant LG Display and a team from Sogang University in Seoul have collaborated on a new AI-based content creation technology that’s designed to address the issue of latency and motion blur in VR headsets – well ahead of the mounting race for higher and higher display resolutions.

For good reason, VR hardware developers are adamant about low motion-to-photon latency, or the amount of time between an input movement like a head turn, and when the screen updates to reflect that movement. High latency between what the user does and what they see can cause nausea.

Ideally that latency should be under 20ms, and while current consumer VR headsets have mostly solved this issue, a new wave of ever higher resolution headsets presents the same engineering challenge yet again. VR display latency and motion blur, or what happens when a display’s pixels don’t illuminate fast enough, are the two big targets for LG and Sogang’s new AI tech.

Motion blur caused by full persistence display – Image courtesy Oculus
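
To see why persistence matters, consider rough numbers for how far a lit pixel smears across the retina during a head turn (the head speed and angular resolution below are illustrative assumptions, not figures from LG’s research):

```python
# Why persistence matters, in rough numbers: during a head turn, every
# millisecond a pixel stays lit smears it across the retina.

head_speed_dps = 100  # a brisk head turn, in degrees per second
ppd = 15              # assumed angular resolution of a current-gen headset

def blur_pixels(persistence_ms: float) -> float:
    """Pixels of smear for a lit interval of `persistence_ms`."""
    return head_speed_dps * (persistence_ms / 1000) * ppd

print(blur_pixels(11.1))  # full persistence at 90 Hz: ~17 px of smear
print(blur_pixels(2.0))   # low persistence: ~3 px
```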

“The core of the newly developed technology is an algorithm that can generate ultra-high resolution images from low-resolution ones in real time. Deep learning technology makes this conversion possible without using external memory devices,” the team told Business Korea.

LG and the Sogang University team’s technology is said to both boost power efficiency and make high resolutions possible on mobile headsets. The team says their AI-based setup “cuts motion to photon latency and motion blurs to one fifth or less the current level by slashing system loads when operating displays for VR.”

To test the system’s latency and motion blur, the team also created a device sporting a precision motor that simulates human neck movements and an optical system based on the human visual system.

“This study by LG Display and Sogang University is quite meaningful in that this study developed a semiconductor which accelerates with low power realized through AI without an expensive GPU in a VR device,” said professor Kang Seok-ju, who has carried out this study since 2015 and leads the research team.

LG recently partnered with Google to produce an 18 Mpixel, 4.3-in, 1,443-ppi, 120Hz OLED display made for wide field of view VR headsets, showing off the display at SID Display Week a few days ago. Google claims the panel is the “world’s highest resolution OLED-on-glass display.”



DigiLens is Developing a Waveguide Display for 150 Degree XR Headsets

DigiLens, a developer of transparent waveguide display technology, says it’s working toward a waveguide display which could bring a 150 degree field of view to AR and VR (or XR) headsets. The company expects the display will be available in 2019.

Founded in 2005, DigiLens has developed a proprietary waveguide manufacturing process which allows the company to “print” light manipulating structures (Bragg gratings) into a thin and transparent material wherein light can be guided along the optic and be made to project perpendicularly, forming an image in the user’s eye. While DigiLens isn’t the only company which makes waveguide displays, they claim that their process offers a steep cost advantage compared to competitors. The company says they’ve raised $35 million between its Series A and B investment rounds.

DigiLens Founder & CTO Jonathan Waldern | Image courtesy DigiLens

While DigiLens’ displays have primarily been used in HUD-like applications, the company is increasingly positioning its wares toward the growing wearable, AR, and VR industries. At AWE 2018 last week, DigiLens Founder & CTO Jonathan Waldern told me that the company expects to offer a waveguide display suitable for AR and VR headsets which could offer a 150 degree field of view between both eyes. He said that a single display could be suitable for AR and VR modes in the same headset by utilizing a liquid crystal blackout layer which can switch between transparent and opaque, something which DigiLens partner Panasonic has developed. A clip-on light blocker or other type of tinting film ought to be suitable as well.

Key to achieving such a wide field of view will be condensing the display down to a single grating, Waldern said. Until recently the company’s displays used three gratings (one each for red, green, and blue colors), but DigiLens recently announced that their latest displays can use just two gratings by splitting the green channel between the red and blue layers.

Image courtesy DigiLens

As waveguides are limited by the refractive index of the optical material, moving all colors into a single grating will allow multiple gratings to instead be used to transmit multiple slices of the image along each layer and then be aligned during projection, offering a wider field of view than a single grating could support.

While DigiLens plans to make this wide field of view waveguide display available in 2019, it will take longer until we’re likely to see it in a commercially available headset, since it will be up to another company to decide to build a headset using the display. And while DigiLens doesn’t make headsets themselves, it’s likely that it will develop a reference headset as it has done for its other displays.

Image courtesy DigiLens

With its current dual-grating display, DigiLens has partnered with smart helmet company Sena to offer its MonoHUD display (25 degree diagonal FOV) as a HUD for motorcyclists. That product will soon launch priced at $400, which DigiLens says has been made possible by a new “inkjet coating manufacturing process,” which brings improved visual quality and “significantly less cost,” so there’s hope that their wide FOV display could reach consumer price points as well.

Image courtesy DigiLens

The company is also presently developing a reference headset for what they call EyeHUD, a mono display in a glasses-like form factor that’s designed as a smartphone companion device.


Samsung “Anti SDE” Trademark Suggests New VR Display Tech Coming to Market

Samsung filed an interesting trademark with the European Union Intellectual Property Office recently which suggests the company is working on a new AMOLED display for VR that specifically addresses the screen door effect (SDE).

The screen door effect is a visual artifact of displays such as those used in the Rift and Vive. Because there are unlit gaps between pixels or subpixels, they can become more visible when viewed under VR optics, creating unsightly grid-like lines which look like a fine linen mesh, or a screen door, between the user and the content.

As uncovered by Dutch website GalaxyClub.nl, the Korean tech giant applied for the name ‘Anti SDE AMOLED’ last Friday. There isn’t any supporting information beyond the name itself, but considering it directly references SDE, it’s very likely the company is taking its next step in making more VR-specific hardware.

SEE ALSO
Here's a Look Inside Both Samsung & JDI's High Pixel Density VR Panels

There are a number of ways to reduce SDE in VR headsets. One is creating higher ‘fill factor’ panels, which reduces the spaces between pixels and subpixels. Packing in pixels at a high PPI (pixels per inch) density, while not a guaranteed way to reduce perceived SDE, helps overall as well. And as with the OSVR HDK2, panel makers can apply a diffusion layer on top of the display, which diffuses the light emitted by pixels to compensate for the unlit spaces between them (less desirable because it reduces clarity).

Healthy speculation: Samsung showed off a 2.43-inch, 3,840 × 2,160 resolution (120Hz) panel at SID Display Week in May which featured 1,200 PPI – about 2.6 times the PPI of the Rift or Vive’s ~460 PPI. It’s possible the company is basing its work on this prototype with the intention of bringing it to market via other manufacturers (Vive Pro contains Samsung displays), or using it in its own bespoke VR headset. Again, we just don’t know, but we’ll certainly be keeping our eyes peeled for what could be a solution to the screen door effect.


New Smartphone-tethered Qualcomm Headset Has 2x the Pixels of Vive Pro

This week at CES 2019 Qualcomm is showing off a new VR reference headset sporting impressive new displays that may well define the next wave of VR headsets.

Qualcomm has been a somewhat silent enabler of most of the recent and upcoming standalone VR headsets, not only because it makes the Snapdragon chip that’s central to many of these devices, but also because of its ‘HMD Accelerator’ program which helps companies rapidly bring VR headsets to market by supplying reference designs and pairing companies with partner solutions and capable manufacturers.

Qualcomm’s reference designs act as a jumping-off point for companies to craft a headset to their needs, which could be as simple as slapping a logo on the side, or as extensive as new industrial design, customization of key components, or adding entirely new tech that’s not part of the original reference design. In many cases though, the essential foundation of the reference design can be felt in the end product.

Qualcomm and HTC have worked closely together in the past on Focus, HTC’s first standalone headset, which bore many of the hallmark features found in Qualcomm’s Snapdragon 835 reference headset. Lenovo’s Mirage Solo and a handful of other headsets also share a lineage with one of Qualcomm’s reference designs.

That’s a long way to say this: looking at Qualcomm’s latest reference headsets is a good way to get a preview of devices that are on their way to market.

Photo by Road to VR

At CES 2019 this week we met with Qualcomm who was demonstrating what they called a new reference headset that (surprise surprise) had a detachable tether to a smartphone—the feature that HTC massively teased this week with Cosmos, but wasn’t ready to talk about. The USB-C connector on the Qualcomm reference headset could also easily be plugged into a PC, just like HTC says Cosmos will be able to do.

Note: While Qualcomm called the headset a reference design, it appeared to be a newer, unannounced prototype of Acer’s OJO headset, though we gather this is also based on Qualcomm’s latest reference design for smartphone-tetherable headsets designed for Snapdragon 855 devices. For ease of discussion, I’m going to stick to calling this the ‘Qualcomm reference headset’ for now, because the company wasn’t sharing details about how Acer/Quanta were involved.

The similarities don’t stop there. The Qualcomm reference headset also has a flip-up hinge (just like Cosmos), and detachable headphones (just like Cosmos) that looked very similar to those found on the Vive Deluxe Audio strap.

With these similarities, and HTC’s history of working with Qualcomm, the reference headset almost certainly forms the basis of Vive Cosmos, which gives us a number of big hints about what Cosmos and other near-term headsets could look like.

Which brings us to the display. HTC has said almost nothing about the Cosmos display except that it’s their “sharpest screen yet,” and that the new displays are “real RGB displays” with “minimal screen door effect.”

That’s exactly what I saw in the Qualcomm reference headset, which had a very impressive pair of displays that I don’t believe I’ve seen before. These new displays are LCD, running at up to 90Hz with a resolution of 2,160 × 2,160 per-eye, a huge step up in pixels (2x!) over leading displays in headsets like the Vive Pro at 1,440 × 1,600 (even before talking about subpixels).

Additionally, these new displays appear to be RGB and have excellent fill-factor, offering the least screen door effect—and sharpest image—I’ve seen in a headset using any display of this type. The field of view on the Qualcomm reference headset looked a little tighter than similar headsets though, likely around 85 degrees, which would have slightly exaggerated the sharpness and minimal screen door. Even so, if these displays can support ~100 degrees like many other headsets, they’d still have a big edge in sharpness and minimal screen door.
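
The headline ‘2x’ claim holds up, as a bit of quick math shows (using the ~85 degree FOV estimate above):

```python
# The '2x the pixels' claim, checked:
new_panel = 2160 * 2160      # per-eye resolution seen in the reference headset
vive_pro  = 1440 * 1600      # Vive Pro / Odyssey per-eye resolution
print(new_panel / vive_pro)  # ~2.03 -- almost exactly double

# And a rough angular resolution estimate at the observed ~85 degree FOV:
print(round(2160 / 85, 1))   # ~25.4 horizontal pixels per degree
```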

SEE ALSO
Understanding the Difference Between 'Screen Door Effect', 'Mura', & 'Aliasing'

It’s not clear if Cosmos will in fact use this 2,160 × 2,160 display. HTC could opt for another display, but if Cosmos is to have the company’s “sharpest screen yet,” then it can’t be the Vive Pro display—and if it’s not this new display, then we’re not sure which it would be, because there are no VR displays in any existing or upcoming headsets (to our knowledge) that fall between those two resolutions.

The Qualcomm reference headset also clues us in to what the smartphone tethering function is likely to look like on Cosmos, and the device that would power it.

Photo by Road to VR

The Qualcomm reference headset was plugged into a Qualcomm MTP-8150 (an early hardware test kit) based on Snapdragon 855 with 5G hardware built in. The device was powering the headset, rendering the content, and handling the processing necessary for the optical 6DOF tracking. Qualcomm had a local 5G network set up which was streaming volumetric video content from NextVR (which looked really impressive on the high res display) as a 5G proof of concept.

The MTP-8150 is like a reference device for a smartphone, before all the hardware has been compacted into a sleek form-factor. Phone makers use MTPs to test hardware while designing new phones.

So for Cosmos, HTC’s play very much seems to be launching a new phone—probably built on Snapdragon 855 and including 5G—that will be compatible with Vive Cosmos. That would explain why HTC wasn’t ready to talk about the headset’s smartphone compatibility: because it has yet to announce the phone that will power it.

HTC says they’ll have more to say about Cosmos later this year—pay attention when they gear up to launch their next phone, because that’ll probably be when we start to hear specifics on Cosmos.


Founded by CERN Engineers, CREAL3D’s Light-field Display is the Real Deal

Co-founded by former CERN engineers who contributed to the ATLAS project at the Large Hadron Collider, CREAL3D is a Switzerland-based startup that’s created an impressive light-field display that’s unlike anything in an AR or VR headset on the market today.

At CES last week we saw and wrote about lots of cool stuff. But hidden in the less obvious places we found some pretty compelling bleeding-edge projects that might not be in this year’s upcoming headsets, but which surely paint a promising picture for the next next-gen of AR and VR.

One of those projects wasn’t in CES’s AR/VR section at all. It was hiding in an unexpected place—one and a half miles away, in an entirely different part of the conference—blending in as two nondescript boxes on a tiny table among a band of Swiss startups representing at CES as part of the ‘Swiss Pavilion’.

It was there that I met Tomas Sluka and Tomáš Kubeš, former CERN engineers and co-founders of CREAL3D. They motioned to one of the boxes, each of which had an eyepiece to peer into. I stepped up, looked inside, and after one quick test I was immediately impressed—not with what I saw, but how I saw it. But it’ll take me a minute to explain why.

Photo by Road to VR

CREAL3D is building a light-field display. Near as I can tell, it’s the closest thing to a real light-field that I’ve personally had a chance to see with my own eyes.

Light-fields are significant to AR and VR because they’re a genuine representation of how light exists in the real world, and how we perceive it. Unfortunately they’re difficult to capture or generate, and arguably even harder to display.

Every AR and VR headset on the market today uses some tricks to try to make our eyes interpret what we’re seeing as if it’s actually there in front of us. Most headsets are using basic stereoscopy and that’s about it—the 3D effect gives a sense of depth to what’s otherwise a scene projected onto a flat plane at a fixed focal length.

Such headsets support vergence (the movement of both eyes to fuse two images into one image with depth), but not accommodation (the dynamic focus of each individual eye). That means that while your eyes are constantly changing their vergence, the accommodation is stuck in one place. Normally these two eye functions work unconsciously in sync, hence the so-called ‘vergence-accommodation conflict’ when they don’t.

More simply put, almost all headsets on the market today are displaying imagery that’s an imperfect representation of how we see the real world.

On more advanced headsets, ‘varifocal’ approaches dynamically shift the focal length based on where you’re looking (with eye-tracking). Magic Leap, for instance, supports two focal lengths and jumps between them as needed. Oculus’ Half Dome prototype does the same, and—from what we know so far—seems to support a wide range of continuous focal lengths. Even so, these varifocal approaches still have some inherent issues that arise because they aren’t actually displaying light-fields.

So, back to the quick test I did when I looked through the CREAL3D lens: inside I saw a little frog on a branch very close to my eye, and behind it was a tree. After looking at the frog, I focused on the tree which came into sharp focus while the frog became blurry. Then I looked back at the frog and saw a beautiful, natural blur blossom over the tree.

Above is raw, through-the-lens footage of the CREAL3D light-field display in which you can see the camera focusing on different parts of the image. (CREAL3D credits the 3D asset to Daniel Bystedt).

Why is this impressive? Well, I knew they weren’t using eye-tracking, so I knew what I was seeing wasn’t a typical varifocal system. And I was looking through a single lens, so I knew what I was seeing wasn’t mere vergence. This was accommodation at work (the dynamic focus of each individual eye).

The only explanation for being able to properly accommodate between two objects with a single eye (and without eye-tracking) is that I was looking at a real light-field—or at least something very close to one.

That beautiful blur I saw was the area of the scene not in focus of my eye, which can only bring one plane into focus at a time. You can see the same thing right now: close one eye, hold a finger up a few inches from your eye and focus on it. Now focus on something far behind your finger and watch as your finger becomes blurry.

This happens because the light from your finger and the light from the more distant objects is entering your eye at different angles. When I looked into CREAL3D’s display, I saw the same thing, for the same reason—except I was looking at a computer generated image.
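
You can put rough numbers on that finger experiment. Eyes focus in diopters (the reciprocal of distance in meters), and perceived blur scales with the diopter gap between where you’re focused and where an object sits; the distances below are purely illustrative:

```python
# The finger experiment in numbers: blur scales with the diopter gap
# between the focus distance and the object distance.

def defocus_diopters(focus_m: float, object_m: float) -> float:
    """Diopter gap between where the eye is focused and where an object sits."""
    return abs(1 / focus_m - 1 / object_m)

# Focused on a finger 0.1 m away, a wall 3 m away is badly defocused:
print(round(defocus_diopters(0.1, 3.0), 2))  # 9.67 D -- very blurry
# Focused at 3 m, refocusing to 5 m barely matters:
print(round(defocus_diopters(3.0, 5.0), 2))  # 0.13 D -- nearly sharp
```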

A little experiment with the display really drove this point home. Holding my smartphone up to the lens, I could tap on the frog and my camera would bring it into focus. I could also tap the tree and the focus would switch to the tree while the frog became blurry. As far as my smartphone’s camera was concerned… these were ‘real’ objects at ‘real’ focal depths.

Through-the-lens: focusing on the tree. | Image courtesy CREAL3D

That’s the long way of saying (sorry, light-fields can be confusing) that light-fields are the ideal way to display virtual or augmented imagery—because they inherently support all of the ‘features’ of natural human vision. And it appears that CREAL3D’s display does much of the same.

But, these are huge boxes sitting on a desk. Could this tech even fit into a headset? And how does it work anyway? Founders Sluka and Kubeš weren’t willing to offer much detail on their approach, but I learned as much as I could about the capabilities (and limitations) of the system.

The ‘how’ part is the least clear at this point. Sluka would only tell me that they’re using a projector, modulating the light in some way, and that the image is not a hologram, nor are they using a microlens array. The company believes this to be a novel approach, and that their synthetic light-field is closer to an analog light-field than any other they’re aware of.

SEE ALSO
Facebook Open-sources DeepFocus Algorithm for More Realistic Varifocal VR Rendering

Sluka tells me that the system supports “hundreds of depth-planes from zero to infinity,” with a logarithmic distribution (higher density of planes closer to the eye, and lower density further). He said that it’s also possible to achieve a depth-plane ‘behind’ the eye, meaning that the system can correct for prescription eyewear.
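
Here’s one way to picture that logarithmic distribution: planes spaced uniformly in diopters (1/distance) land densely near the eye and sparsely toward infinity. The plane count and near limit below are assumptions for illustration, not CREAL3D’s actual figures:

```python
# A sketch of a 'logarithmic distribution' of depth planes: spacing planes
# uniformly in diopters packs them densely near the eye and sparsely far away.

import numpy as np

def depth_planes(n: int = 12, max_diopters: float = 4.0) -> np.ndarray:
    """Plane distances in meters, uniformly spaced in diopters."""
    diopters = np.linspace(max_diopters, 0.25, n)  # covers 0.25 m .. 4 m here
    return 1.0 / diopters

print(np.round(depth_planes(), 2))
# -> [0.25 0.27 0.3 ... 1.64 4.0] : many planes near the eye, few far away
```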

The pair also told me that they believe the tech can be readily shrunk to fit into AR and VR headsets, and that the bulky devices shown at CES were just a proof of concept. The company expects that they could have their light-field displays ready for VR headsets this year, and shrunk all the way down to glasses-sized AR headsets by 2021.

At CES CREAL3D showed a monocular and binocular (pictured) version of their light-field display. | Photo by Road to VR

As for limitations, the display currently only supports 200 levels per color (RGB), and increasing the field of view and the eyebox will be a challenge because of the need to expand the scope of the light-field, though the team expects they can achieve a 100 degree field of view for VR headsets and a 60–90 degree field of view for AR headsets. I suspect that generating synthetic light-fields in real-time at high framerates will also be a computational challenge, though Sluka didn’t go into detail about the rendering process.

Through-the-lens: focusing on the near pieces. The blurred scene in the background is not generated; it is ‘real’, owing to the physics of light-fields. | Image courtesy CREAL3D

It’s exciting, but early for CREAL3D. The company is a young startup with 10 members so far, and there’s still much to prove in terms of feasibility, performance, and scalability of the company’s approach to light-field displays.

Sluka holds a Ph.D. in Science Engineering from the Technical University of Liberec in the Czech Republic. He says he’s a multidisciplinary engineer, and he has the published works to prove it. The CREAL3D team counts a handful of other Ph.D.s among its ranks, including several from Intel’s shuttered Vaunt project.

Sluka told me that the company has raised around $1 million in the last year, and that the company is in the process of raising a $5 million round to further grow the company and its development.


AUO to Showcase 1,688 PPI Display with HDR for VR Headsets

Taiwan-based display and electronics maker AUO plans to showcase a range of new displays this week, including a 2.9-inch 3.5K × 3.5K display with HDR capabilities for VR headsets.

AUO announced Monday that it plans to showcase its newest displays during the annual Display Week event this week in Silicon Valley. Among its latest wares is a 2.9-inch LTPS LCD display made for VR headsets.

Beyond simply having a very impressive resolution of 3,456 × 3,456 (1,688 PPI) per display, AUO says the new display uses mini LED backlighting which affords it up to 2,304 dimming zones for HDR.

HDR (High Dynamic Range) is the ability of a display to produce ranges of brightness that far exceed standard displays, thereby allowing it to more realistically portray varying levels of brightness, especially for high brightness content like scenes with bright sunlight, fire, explosions, and more. No commercially available VR headset offers HDR capabilities.

SEE ALSO
NVIDIA Demonstrates Experimental "Zero Latency" Display Running at 1,700Hz

Though the HDR capabilities of AUO’s display can’t be controlled on a per-pixel basis, the 2,304 dimming zones spread across the display can be adjusted individually to boost brightness where needed according to the current frame.

Other specs of the display are still unknown, including refresh rate, response time, contrast ratio, and the response time of the HDR zones; it also isn’t clear what challenges the latter could pose for critical VR-specific display characteristics like low persistence.

The company says that some of its other new LTPS LCD displays (not intended for VR) boast a 120Hz refresh rate, 8.3ms response time, and up to 1,000 nits peak brightness, though it isn’t clear if these specs are shared by the VR display.

AUO plans to show the new HDR VR display this week at Display Week, though it hasn’t spoken of the display’s price or availability.


JBD Shows Micro LED Display for AR/VR with Absurd 3,000,000 Nits Brightness

This week at CES 2020, China-based Jade Bird Display (JBD) revealed its latest portfolio of micro LED displays, which the company is positioning as an ideal fit for AR and VR devices. Among the company’s minuscule displays is one that’s smaller than a penny but capable of a blinding 3,000,000 nits.

Founded in 2015, JBD has been working to commercialize micro LED display technology. The company says its focus is on creating the “smallest, brightest, and most efficient micro-display panels,” and is “currently transitioning from a research-and-development phase into a manufacturing-and-sales phase.”

At CES 2020 this week, the company demonstrated its brightest and most pixel-dense displays to date. The displays, while still just monochromatic, could disrupt the design of AR and VR headsets thanks to their extreme brightness.

Blinded By the Light

Photo by Road to VR

On the bright end of the spectrum is the JBD5UM720P-G, a 1,280 × 720 micro LED display capable of an absolutely absurd 3,000,000 nits of brightness. It’s hard to even put that number into a meaningful context, but I’ll try.

A typical computer monitor is around 300 nits. The iPhone 11 display is rated at 625 nits. An HDR TV can push 2,000 nits.

3,000,000 would literally be painful (and even dangerous) to look at… so who the hell needs that much brightness? Well, it turns out that having an extremely bright display source means greatly relaxing a significant optical design constraint: transparency.

Optics generally need to be highly transparent, especially when using novel compact designs (like the kind you’d want in a small form-factor headset) which compress the optical path by bouncing light back and forth many times. Every bounce and pass through a lens loses light based on the material’s transparency or reflectivity, dimming the image as the light progresses along the optical path. Especially for those AR headsets intended to be used in full outdoor daytime brightness, display brightness is a serious challenge.

With 3 million nits of brightness at the source, the optical path doesn’t need to worry nearly as much about light efficiency, potentially allowing the use of cheaper lenses and more complex optical designs which can instead optimize for other factors. With 3 million nits at the source, the optical path of an AR or VR headset could be just 0.1% efficient and you’d still get a whopping 3,000 nits out the other side.

And if you don’t need that absolutely absurd amount of brightness (because your optics have, say, 10% efficiency), you can still benefit by running the display at a much lower brightness and saving on power.
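
To make the arithmetic concrete, here’s a minimal Python sketch of the trade-off; only the 3,000,000-nit figure comes from JBD, while the function and efficiency values are illustrative:

```python
# Back-of-the-envelope: the luminance that survives an optical path is
# just source luminance times end-to-end efficiency. Values below are
# illustrative; only the 3,000,000-nit figure comes from JBD.

def output_nits(source_nits: float, path_efficiency: float) -> float:
    """Luminance left after the optics (efficiency as a fraction, e.g. 0.001 = 0.1%)."""
    return source_nits * path_efficiency

source = 3_000_000  # JBD5UM720P-G peak brightness (nits)

print(output_nits(source, 0.001))  # 0.1% efficient path -> 3,000 nits
print(output_nits(source, 0.10))   # 10% efficient path  -> 300,000 nits
print(output_nits(30_000, 0.10))   # with 10% optics, a 30,000-nit source
                                   # already yields a bright 3,000-nit image
```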

A Penny For 2,560 × 1,440 Thoughts

Photo by Road to VR

On the extreme pixel density side of things, the JBD25UMFHD-B packs an incredible 2,560 × 1,440 pixels into a display smaller than a penny—just 0.31″ diagonally. At 10,000 pixels per inch, the distance between pixels is about 2.5 micrometers. Just to remind you, micrometers are three orders of magnitude larger than nanometers (there are 1,000 nanometers in a micrometer) and three orders of magnitude smaller than millimeters (1,000 micrometers in a millimeter).
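
Those density figures are easy to sanity-check from the resolution and diagonal alone; a quick sketch, assuming the 0.31-inch figure is the panel’s full active-area diagonal:

```python
import math

# Sanity-check the quoted density from resolution and diagonal alone
# (assuming the 0.31-inch figure is the full active-area diagonal).
w_px, h_px = 2560, 1440
diag_in = 0.31

diag_px = math.hypot(w_px, h_px)   # pixels along the diagonal
ppi = diag_px / diag_in            # pixels per inch
pitch_um = 25_400 / ppi            # 25,400 micrometers per inch

print(f"{ppi:,.0f} ppi, {pitch_um:.1f} um pixel pitch")
# -> ~9,475 ppi and a ~2.7 um pitch, consistent with the rounded
#    "10,000 ppi" and "2.5 micrometer" figures quoted above.
```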

And if you need brightness from this display, fear not, the JBD25UMFHD-B is still capable of a blinding 150,000 nits.

Though it’s monochromatic, this display is high performance; JBD says it’s got a blistering 360Hz refresh rate and 10,000:1 contrast ratio.

Limitations & Applications

Today, JBD’s micro LED displays are monochromatic and only offer 256 color levels which significantly limits their use-cases for the time being—you won’t be watching video playback, browsing the web, or playing full FOV games on displays like these any time soon.

However, they could be absolutely ideal for AR glasses which aim to communicate raw information through text, symbols, and other spatial elements. Use-cases like displaying a line and a bright arrow floating down the road to show the next turn on your GPS route, projecting a control interface onto your palm, or floating your latest text message in front of you could all be extremely compelling if done correctly in a headset equipped with these displays.

Coupled with great headtracking, the 360Hz refresh rate of the JBD25UMFHD-B (and brightness capable of matching any level seen in the real world) could go a long way toward making AR elements look absolutely locked to the real world around you.
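
To see why the refresh rate matters for world-locking, consider a simplified bound, with illustrative numbers, that ignores prediction and the rest of the motion-to-photon pipeline:

```python
# Worst-case angular drift of an AR overlay between display updates is
# roughly head speed x frame time. This ignores prediction and the rest
# of the motion-to-photon pipeline; the numbers are illustrative.

def drift_deg(head_speed_dps: float, refresh_hz: float) -> float:
    return head_speed_dps / refresh_hz

for hz in (90, 120, 360):
    print(f"{hz:>3} Hz: {drift_deg(100, hz):.2f} deg lag at a 100 deg/s head turn")
# 90 Hz -> ~1.11 deg, 360 Hz -> ~0.28 deg: a 4x tighter bound on how far
# a 'locked' AR element can trail the real world between updates.
```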

Oculus concept for a waveguide-based VR headset | Image courtesy Facebook

JBD says it’s working on bichromatic and trichromatic versions of its micro LED displays, which would successively unlock a broader range of use-cases, including the potential to be used in compact VR headsets, like the concept VR glasses Oculus showed back in 2018.

The post JBD Shows Micro LED Display for AR/VR with Absurd 3,000,000 Nits Brightness appeared first on Road to VR.

Hands-on: CREAL is Shrinking Its Light-field Display for AR & VR Headsets

Switzerland-based CREAL is developing a light-field display which it believes will fit into VR headsets and eventually AR glasses. An earlier tech demo showed impressive fundamentals, and this week at CES 2020 the company revealed its progress in shrinking the tech to a practical size.

Co-founded by former CERN engineers, CREAL is building a display that’s unlike anything in AR or VR headsets on the market today. The company’s display tech is the closest thing I’ve seen to a genuine light-field.

Why Light-fields Are a Big Deal

Knowing what a light-field is and why it’s important to AR and VR is key to understanding why CREAL’s tech could be a big deal, so let me drop a quick primer here:

Light-fields are significant to AR and VR because they’re a genuine representation of how light exists in the real world, and how we perceive it. Unfortunately they’re difficult to capture or generate, and arguably even harder to display.

Every AR and VR headset on the market today uses some tricks to try to make our eyes interpret what we’re seeing as if it’s actually there in front of us. Most headsets are using basic stereoscopy and that’s about it—the 3D effect gives a sense of depth to what’s otherwise a scene projected onto a flat plane at a fixed focal length.

Such headsets support vergence (the movement of both eyes to fuse two images into one image with depth), but not accommodation (the dynamic focus of each individual eye). That means that while your eyes are constantly changing their vergence, the accommodation is stuck in one place. Normally these two eye functions work unconsciously in sync, hence the so-called ‘vergence-accommodation conflict’ when they don’t.
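
To put rough numbers on that conflict, here’s a small sketch; the IPD and fixed focal distance are typical assumed values, not any specific headset’s specs:

```python
import math

# Quantifying the vergence-accommodation conflict on a fixed-focus headset.
# IPD and the fixed focal plane are typical assumed values, not any
# particular headset's specs.

IPD_M = 0.063         # interpupillary distance (assumption)
FOCAL_PLANE_M = 1.5   # fixed focal distance of the display optics (assumption)

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight for an object at distance_m."""
    return 2 * math.degrees(math.atan((IPD_M / 2) / distance_m))

def conflict_diopters(object_m: float) -> float:
    """Mismatch between vergence and accommodation demand, in diopters (1/m)."""
    return abs(1 / object_m - 1 / FOCAL_PLANE_M)

for d in (0.3, 0.5, 1.5, 10.0):
    print(f"{d:>4} m: vergence {vergence_angle_deg(d):5.2f} deg, "
          f"conflict {conflict_diopters(d):.2f} D")
# An object at 0.3 m asks the eyes to focus ~2.7 D nearer than the fixed
# focal plane allows; that mismatch is the conflict described above.
```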

On more advanced headsets, ‘varifocal’ approaches dynamically shift the focal length based on where you’re looking (with eye-tracking). Magic Leap, for instance, supports two focal lengths and jumps between them as needed. Oculus’ Half Dome prototype does the same, and—from what we know so far—seems to support a wide range of continuous focal lengths. Even so, these varifocal approaches still have some inherent issues that arise because they aren’t actually displaying light-fields.

More simply put, almost all headsets on the market today are displaying imagery that’s an imperfect representation of how we see the real world. CREAL’s approach aims to get us several steps closer.

That’s why I was impressed when I saw their tech demo at CES 2019. It was a huge, hulking box, but it generated a light-field in which, with one eye (and without eye-tracking), I could focus on objects at arbitrary depths (which means that accommodation, the focusing of the lens of the eye, works just like when you’re looking at the real world).

Above is raw, through-the-lens footage of the CREAL light-field display in which you can see the camera focusing on different parts of the image. (CREAL credits the 3D asset to Daniel Bystedt).

Slimming Down for AR & VR

At CES 2020 this week, CREAL showed its latest progress toward shrinking the tech to fit into AR and VR headsets.

Photo by Road to VR

Though the latest prototype isn’t yet head-mounted, the company has shrunk the display and projection module (the ‘optical engine’) enough that it could reasonably fit on a head-worn device. The current bottleneck keeping it on a static mount is the electronics required to drive the optical engine, which are housed in a large box.

Photo by Road to VR

Shrinking those driving electronics is the next step; on that front, the company told me it already has a significantly smaller board, which will in turn give way to an ASIC (a tiny chip) that could fit into a glasses-sized AR headset.

CREAL’s ‘benchmark’ tech demo | Photo by Road to VR

Looking through the CES 2020 demo, the company showed that it had replicated its light-field technology in a much smaller package, though with a smaller eye-box, narrower field of view, and lower resolution than its larger benchmark demo.

CREAL told me it intends to expand the field of view on the compact optical engine by projecting additional non-light-field imagery around the periphery.

This is very similar to the concept behind Varjo’s ‘retina resolution’ headset, which puts a high resolution display in the center of the view while filling out the periphery with lower resolution imagery. Except, where Varjo needs additional displays, CREAL says it can project the lower fidelity peripheral views from the same optical engine as the light-field itself.

The company explained that the reason for doing it this way (rather than simply showing a larger light-field) is that it reduces the computational complexity of the scene by shrinking the portion of the image which is a genuine light-field. This is ‘foveated rendering’, light-field style.

CREAL hopes to cover the entire fovea—the small portion in the center of your eye’s view which can see in high detail and color—with the light-field. The ultimate goal, then, would be to use eye-tracking to keep the central light-field portion of the view exactly aligned with the eye as it moves. If done right, this could make it feel like the entire field of view is covered by a light-field.
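
A back-of-the-envelope sketch shows the scale of the computational savings; the field-of-view and patch sizes below are illustrative assumptions, not CREAL’s specs:

```python
# Rough area ratio showing why a fovea-sized light-field is so much
# cheaper than a full-field one (flat-angle approximation; the numbers
# are illustrative assumptions, not CREAL's specs).

full_fov_deg = 100       # total field of view to cover
foveal_patch_deg = 10    # steered light-field patch: ~5 deg of fovea
                         # plus margin for eye-tracking error

area_fraction = (foveal_patch_deg / full_fov_deg) ** 2
print(f"{area_fraction:.1%} of the view needs true light-field rendering")
# -> 1.0%: roughly a 100x reduction in the portion of the image that
#    must be generated as a genuine light-field.
```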

That’s all theoretically possible, but execution will be anything but easy.

A growing question is what level of quality the display tech can ultimately achieve. While the light-field itself is impressive, the demos so far don’t show good color representation or particularly high resolution. CREAL has been somewhat hesitant to detail exactly how its light-field display works, which makes it difficult to tell what might be a fundamental limitation rather than a straightforward optimization.

VR Before AR

The immediate next step, the company tells me, is to move from the current static demo to a head-mounted prototype. Further in the future the goal is to shrink things toward a truly glasses-sized AR device.

A mockup of the form-factor CREAL believes it can achieve in the long run (this anticipates off-board compute and power). | Photo by Road to VR

Before the tech hits AR glasses though, CREAL thinks that VR headsets will be the first stop for its light-field tech, given their more generous space allowances and the number of other challenges still facing AR glasses (power, compute, tracking, etc).

CREAL doesn’t expect to bring its own headset to market, but is instead positioning itself to work with partners and eventually license its technology for use in their headsets. Development kits are available today for select partners, the company says, though it will likely still be a few years yet before the tech will be ready for prime time.

The post Hands-on: CREAL is Shrinking Its Light-field Display for AR & VR Headsets appeared first on Road to VR.


JDI Starts Mass Production on 1,058 ppi High Pixel Density LCD for VR Glasses

Japan Display Inc. (JDI), a display conglomerate created by Sony, Toshiba, and Hitachi, today announced the mass production of a new high pixel density, 2.1-inch, 1,058 ppi LCD display created for VR ‘glasses’ style headsets.

The low temperature polysilicon (LTPS) TFT-LCD panel is said to use a special optical design that is intended to appeal to manufacturers looking to build smaller, lighter glasses-type headsets. Notably, the company says in a press release that its new display is used in VR glasses that have already been introduced to the market.

The company’s new 2.1-inch 1,058 ppi panel boasts a 1,600 × 1,600 resolution in its square format; JDI is also offering variants with corner-cut shapes. Clocked at 120Hz, the panel has a 4.5 ms response time, global blinking backlights, and a brightness of 430 nits.

Although unconfirmed at this time, Pico’s impressive VR Glasses prototype unveiled at CES earlier this year included a 1,600 × 1,600 panel, albeit clocked at 90Hz, a limit that likely has more to do with the constraints of a mobile chipset’s rendering power than with the panel’s supposed full 120Hz capability.

Why so small? Pico is able to offer this smaller form factor by using much thinner ‘pancake’ optics, which cut the optical path significantly by ‘folding’ it back on itself through the use of polarized light and multiple lens elements.

SEE ALSO
Hands-on: Pico 'VR Glasses' Prototype is the Most Impressive VR Viewer Yet

JDI’s previous VR display, revealed in Summer 2018, was larger at 3.25 inches, but at a slightly lower pixel density of 1,001 ppi. The panel, which was 2,160 × 2,432 resolution and also clocked at 120Hz, did however boast a lower latency of 2.2 ms.

It seems that with this downsizing from larger, more conventional displays to smaller ones, JDI is making a significant bet on the upcoming appeal of smaller form-factor headsets. A few key trade-offs of VR ‘glasses’ as they exist now are off-board processing (via a dedicated compute unit or smartphone), typically a lack of 6DOF tracking, and a narrower field of view. That said, reducing friction by making VR headsets lighter and smaller may appeal to users looking to watch traditional streaming video and browse the 2D web.

The post JDI Starts Mass Production on 1,058 ppi High Pixel Density LCD for VR Glasses appeared first on Road to VR.

Facebook Says It Has Developed the ‘Thinnest VR display to date’ With Holographic Folded Optics

Facebook published new research today which the company says shows the “thinnest VR display demonstrated to date,” in a proof-of-concept headset based on folded holographic optics.

Facebook Reality Labs, the company’s AR/VR R&D division, today published new research demonstrating an approach which combines two key features: polarization-based optical ‘folding’ and holographic lenses. In the work, researchers Andrew Maimone and Junren Wang say they’ve used the technique to create a functional VR display and lens that together are just 9mm thick. The result is a proof-of-concept VR headset which could truly be called ‘VR glasses’.

The approach has other benefits beyond its incredibly compact size; the researchers say it can also support significantly wider color gamut than today’s VR displays, and that their display makes progress “toward scaling resolution to the limit of human vision.”

Let’s talk about how it all works.

Why Are Today’s Headsets So Big?

Photo by Road to VR

It’s natural to wonder why even the latest VR headsets are essentially just as bulky as the first generation of headsets that launched back in 2016. The answer is simple: optics. Unfortunately the solution is not so simple.

Every consumer VR headset on the market uses effectively the same optical pipeline: a macro display behind a simple lens. The lens is there to focus the light from the display into your eye. But in order for that to happen the lens needs to be a few inches from the display, otherwise it doesn’t have enough focusing power to focus the light into your eye.

That necessary distance between the display and the lens is the reason why every headset out there looks like a box on your face. The approach is still used today because the lenses and the displays are known quantities; they’re cheap & simple, and although bulky, they achieve a wide field of view and high resolution.
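
A minimal thin-lens sketch, with illustrative numbers, shows why that gap is so stubborn: to push the virtual image of the panel out to a comfortable viewing distance, the panel has to sit just inside the lens’s focal length.

```python
# Thin-lens sketch of why the panel sits a few inches behind the lens:
# 1/f = 1/d_o + 1/d_i. With the display just inside the focal length,
# the virtual image lands at a comfortable distance. Numbers are
# illustrative, not any specific headset's.

def virtual_image_m(focal_mm: float, display_mm: float) -> float:
    """Image distance in meters; negative means a virtual image in front
    of the viewer (the useful case for an HMD)."""
    inv = 1 / focal_mm - 1 / display_mm
    return (1 / inv) / 1000

f_mm = 40.0  # focal length of a simple HMD lens (assumption)
for d_mm in (36.0, 38.0, 39.5):
    print(f"display {d_mm} mm from lens -> virtual image at "
          f"{abs(virtual_image_m(f_mm, d_mm)):.2f} m")
# However you slice it, the panel needs to sit ~4 cm behind the lens;
# that focal gap is the bulk that folded optics try to collapse.
```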

Many solutions have been proposed for making VR headsets smaller, and just about all of them include the use of novel displays and lenses.

The new research from Facebook proposes the use of both folded optics and holographic optics.

Folded Optics

What are folded optics? It’s not quite what it sounds like, but once you understand it, you’d be hard pressed to come up with a better name.

While the simple lenses in today’s VR headsets must be a certain distance from the display in order to focus the light into your eye, the concept of folded optics proposes ‘folding’ that distance over on itself, such that the light still traverses the same distance necessary for focusing, but its path is folded into a more compact area.

You can think of it like a piece of paper with an arbitrary width. When you fold the paper in half, the paper itself is still just as wide as when you started, but its width occupies less space because you folded it over on itself.

But how the hell do you do that with light? Polarization is the key.

Image courtesy Proof of Concept Engineering

It turns out that beams of light have an ‘orientation’. Normally the orientation of light beams is random, but you can use a polarizer to only let light of a specific orientation pass through. You can think of a polarizer like the coin-slot on a vending machine: it will only accept coins in one orientation.

Using polarization, it’s possible to bounce light back and forth multiple times along an optical path before eventually letting it out and into the wearer’s eye. This approach (also known as ‘pancake optics’) allows the lens and the display to move much closer together, resulting in a more compact headset.
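
Here’s a rough sketch of what folding buys and what it costs, assuming the textbook pancake arrangement of three cavity traversals and a 50/50 beamsplitter (actual designs vary):

```python
# Rough sketch of what folding buys and what it costs, assuming the
# textbook pancake arrangement: three traversals of the lens-display gap
# and a 50/50 beamsplitter (real designs vary).

optical_path_mm = 40.0              # distance light must travel to focus
passes = 3                          # toward the eye, back, and out again
physical_gap_mm = optical_path_mm / passes
print(f"physical gap ~{physical_gap_mm:.1f} mm instead of {optical_path_mm:.0f} mm")

throughput = 0.5 * 0.5              # two transits of the half-mirror
print(f"best-case efficiency ~{throughput:.0%} of the display's light")
# -> ~13 mm of depth instead of 40 mm, at the cost of ~75% of the light,
#    which is one reason bright display sources matter for compact optics.
```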

But to go even thinner—to shrink the size of the lenses themselves—Facebook researchers have turned to holographic optics.

Holographic Optics

Rather than using a series of typical lenses (like the kind found in a pair of glasses) in the folded optics, the researchers have formed the lenses into… holograms.

If that makes your head hurt, everything is fine. Holograms are nuts, but I’ll do my best to explain.

Unlike a photograph, which is a recording of the light in a plane of space at a given moment, a hologram is a recording of the light in a volume of space at a given moment.

When you look at a photograph, you can only see the information of the light contained in the plane that was captured. When you look at a hologram, you can look around the hologram, because the information of the light in the entire volume is captured (also known as a lightfield).

SEE ALSO
Hand-tracking Text Input System From Facebook Researchers Throws Out the Keyboard (sort of)

Now I’m going to blow your mind. What if, when you captured a hologram, the scene you captured had a lens in it? It turns out the lens you see in the hologram will behave just like the lens in the scene. Don’t believe me? Watch this video at 0:19 and look at the magnifying glass in the scene, and watch as it magnifies the rest of the hologram, even though it is part of the hologram itself.

This is the fundamental idea behind Facebook’s holographic lens approach. The researchers effectively ‘captured’ a hologram of a real lens, condensing the optical properties of a real lens into a paper-thin holographic film.

So the optics Facebook is employing in this design are, quite literally, a hologram of a lens.

Continue Reading on Page 2: Bringing it All Together

The post Facebook Says It Has Developed the ‘Thinnest VR display to date’ With Holographic Folded Optics appeared first on Road to VR.

Facebook Reality Labs Shows Method for Expanding Field of View of Holographic Displays

Researchers from Facebook’s R&D department, Facebook Reality Labs, and the University of California, Berkeley have published new research which demonstrates a method for expanding the field-of-view of holographic displays.

In the paper, titled High Resolution Étendue Expansion for Holographic Displays, researchers Grace Kuo, Laura Waller, Ren Ng, and Andrew Maimone explain that when it comes to holographic displays there’s an intrinsic inverse link between a display’s field-of-view and its eye-box (the eye-box is the area in which the image from a display can be seen). If you want a larger eye-box, you get a smaller field-of-view. And if you want a larger field of view, you get a smaller eye-box.

If the eye-box is too small, even the movement from the rotation of your eye would make the image invisible because your pupil would leave the eye-box when looking any direction but forward. A large eye-box is necessary not only to keep the image visible during eye movement, but also to compensate for subtle differences in headset fit from one session to the next.

The researchers explain that a traditional holographic display with a 120° horizontal field-of-view would have an eye-box of just 1.05mm—far too small for practical use in a headset. On the other hand, a holographic display with a 10mm eye-box would have a horizontal field-of-view of just 12.7°.

If you want to satisfy both a 120° field-of-view and a 10mm eye-box, the researchers say, you’d need a holographic display with a resolution of 32,500 × 32,500. That’s not only impractical because such a display doesn’t exist, but even if it did, rendering that many pixels for real-time applications would be impossible with today’s hardware.
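
Those numbers fall out of a holographic display’s fixed space-bandwidth product. Here’s one common approximation, our reconstruction rather than the paper’s exact model, which roughly reproduces the 32,500-pixel figure:

```python
import math

# The FOV/eye-box trade-off follows from a holographic display's fixed
# space-bandwidth product: roughly eyebox * sin(FOV/2) ~ N * wavelength / 2,
# with N pixels per dimension. This is our reconstruction of the scaling,
# not the paper's exact model.

WAVELENGTH_M = 533e-9  # green light (assumption)

def pixels_needed(fov_deg: float, eyebox_m: float) -> float:
    return 2 * eyebox_m * math.sin(math.radians(fov_deg) / 2) / WAVELENGTH_M

print(f"~{pixels_needed(120, 0.010):,.0f} pixels per side")
# -> ~32,500, matching the resolution the researchers cite for a display
#    with both a 120-degree FOV and a 10mm eye-box.
```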

So the researchers propose a different solution: decoupling the link between field-of-view and eye-box in a holographic display. The method proposes the use of a scattering element placed in front of the display which scatters the light to expand its cone of propagation (also known as étendue). Doing so allows the field-of-view and eye-box characteristics to be adjusted independently.

But there’s a problem of course. If you put a scattering element in front of a display, how do you form a coherent image from the scattered light? The researchers have developed an algorithm which pre-compensates for the scattering element, such that the ‘scattered’ light actually forms a proper image after being scattered.

At a high level, it’s very similar to the approach that existing headsets use to handle color separation (chromatic aberration) as light passes through the lenses—rendered frames pre-separate colors so that the lens ends up bending the colors back into the correct place.
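
Here’s a toy one-dimensional version of that ‘pre-distort so the optics undo it’ idea; the paper’s actual algorithm handles real diffractive propagation and is far more sophisticated than this conjugate-phase cartoon:

```python
import numpy as np

# Toy 1D illustration of pre-compensation: if a known scatterer multiplies
# the field by exp(i*phi), emit exp(-i*phi) times the desired field so the
# scrambling cancels. The paper's real algorithm handles diffractive
# propagation and perceptual constraints; this is only the core idea.

rng = np.random.default_rng(0)
n = 1024
desired = np.exp(1j * 2 * np.pi * rng.random(n))   # field forming the target image
mask_phase = 2 * np.pi * rng.random(n)             # known scattering element

emitted = desired * np.exp(-1j * mask_phase)       # pre-compensated display field
after_mask = emitted * np.exp(1j * mask_phase)     # field the eye receives

print(np.allclose(after_mask, desired))            # True: scattering undone
```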

Here the orange box represents the field of view of a normal holographic display while the full frame shows the expanded field of view | Image courtesy Facebook Reality Labs

The researchers used optical simulations to hone their algorithm and then built a benchtop prototype of their proposed pipeline to experimentally demonstrate the method for expanding the field of view of a holographic display.

Although the researchers believe their work “demonstrates progress toward more practical holographic displays,” they also say that there is “additional work to be done to achieve a full-color display with high resolution, complete focal depth cues, and a sunglasses-like form factor.”

Toward the end of the paper they identify miniaturization, compute time, and perceptual effects among the challenges needed to be addressed by further research.

The paper also hints at a potential future project for the team: combining this method with prior work from one of the paper’s researchers, Andrew Maimone.

“The prototype presented in this work is intended as a proof-of-concept; the final design is ideally a wearable display with a sunglasses-like form factor. Starting with the design presented by Maimone et al. [2017], which had promising form factor and FoV but very limited eyebox, we propose integrating our scattering mask into the holographic optical element that acts as an image combiner.”

Image courtesy Facebook Reality Labs

If you read our article last month on Facebook’s holographic folded optics, you may be wondering how these projects differ.

The holographic folded optics project makes use of a holographic lens to focus light, but not a holographic display to generate the image in the first place. That project also employs folded optics to significantly reduce the size of such a display.

On the other hand, the research outlined in this article deals with making actual holographic displays more practical by showing that a large field-of-view and large eye-box are not mutually exclusive in a holographic display.

The post Facebook Reality Labs Shows Method for Expanding Field of View of Holographic Displays appeared first on Road to VR.

Stanford & Samsung Develop Ultra-dense OLED Display Capable of 20,000 PPI

Researchers at Stanford and Samsung Electronics have developed a display capable of packing in greater than 10,000 pixels per inch (ppi), a density aimed at the VR/AR headsets, and perhaps even contact lenses, of the future.

Over the years, research and design firms like JDI and INT have been racing to pave the way for ever higher pixel densities for VR/AR displays, astounding convention-goers with prototypes boasting pixel densities in the low thousands. The main idea is to reduce the perception of the dreaded “Screen Door Effect”, which feels like viewing an image in VR through a fine grid.

Last week, however, researchers at Stanford University and Samsung’s South Korea-based R&D wing, the Samsung Advanced Institute of Technology (SAIT), said they’ve developed an organic light-emitting diode (OLED) display capable of delivering greater than 10,000 ppi.

In the paper (via Stanford News), the researchers outline an RGB OLED design that is “completely reenvisioned through the introduction of nanopatterned metasurface mirrors,” taking cues from previous research done to develop an ultra-thin solar panel.

Image courtesy Stanford University, Samsung Electronics

By integrating into the OLED a base layer of reflective metal with nanoscale corrugations, called an optical metasurface, the team was able to produce miniature proof-of-concept pixels with “a higher color purity and a twofold increase in luminescence efficiency,” making it ideal for head-worn displays.

Furthermore, the team estimates that their design could even be used to create displays upwards of 20,000 pixels per inch, although they note that there’s a trade-off in brightness when a single pixel goes below one micrometer in size.

Stanford materials scientist and senior author of the paper Mark Brongersma says the next steps will include integrating the tech into a full-size display, which will fall on the shoulders of Samsung to realize.

It’s doubtful we’ll see any such ultra-high resolution displays in VR/AR headsets in the near term—even with the world’s leading display manufacturer on the job. Samsung is excellent at producing displays thanks to its wide reach (and economies of scale), but there’s still no immediate need to tool mass manufacturing lines for consumer products.

That said, the next generation of VR/AR devices will need a host of other complementary technologies to make good use of such an ultra-high resolution display, including reliable eye-tracking for foveated rendering as well as greater compute power to render ever more complex and photorealistic scenes—things that are certainly coming, though they aren’t here yet.

The post Stanford & Samsung Develop Ultra-dense OLED Display Capable of 20,000 PPI appeared first on Road to VR.

CREAL Raises $7.2 Million to Bring its Light-field Display to AR Glasses

Switzerland-based CREAL is developing a light-field display which it hopes to bring to VR headsets and eventually AR glasses. In November the company raised CHF 6.5 million (~$7.2 million) in a Series A+ investment round to bring on new hires and continue miniaturizing the company’s light-field tech.

Creal says it closed its Series A+ investment round in mid-November, raising CHF 6.5 million (~$7.2 million) led by Swisscom Ventures with participation by existing investors Investiere, DAA Capital Partners, and Ariel Luedi. The new funding marks ~$15.5 million raised by the company thus far.

Over the last few years we’ve seen Creal make progress in shrinking its novel light-field display with the hopes of fitting it into AR glasses. Compared to the displays used in VR and AR headsets today, light-field displays generate an image that accurately represents how we see the real world. Specifically, light-field displays support both vergence and accommodation, the two focus mechanisms of the human visual system. Creal and others say the advantage of such displays is more realistic and more comfortable visuals for VR and AR headsets. For more on light-fields, see our explainer below.

Light-fields are significant to AR and VR because they’re a genuine representation of how light exists in the real world, and how we perceive it. Unfortunately they’re difficult to capture or generate, and arguably even harder to display.

Every AR and VR headset on the market today uses some tricks to try to make our eyes interpret what we’re seeing as if it’s actually there in front of us. Most headsets are using basic stereoscopy and that’s about it—the 3D effect gives a sense of depth to what’s otherwise a scene projected onto a flat plane at a fixed focal length.

Such headsets support vergence (the movement of both eyes to fuse two images into one image with depth), but not accommodation (the dynamic focus of each individual eye). That means that while your eyes are constantly changing their vergence, the accommodation is stuck in one place. Normally these two eye functions work unconsciously in sync, hence the so-called ‘vergence-accommodation conflict’ when they don’t.

On more advanced headsets, ‘varifocal’ approaches dynamically shift the focal length based on where you’re looking (with eye-tracking). Magic Leap, for instance, supports two focal lengths and jumps between them as needed. Oculus’ Half Dome prototype does the same, and seems to support a larger number of focal lengths. Even so, these varifocal approaches still have some inherent issues that arise because they aren’t actually displaying light-fields.

Having demonstrated the fundamentals of its light-field tech, Creal’s biggest challenge is miniaturizing it to fit comfortably into AR glasses while maintaining a wide enough field of view to remain useful. We saw progress on that front early this year at CES 2020, the last major conference before the pandemic cancelled the remainder for the year.

Through-the-lens: The accurate blur in the background is not generated, it is ‘real’, owing to the physics of light-fields. | Image courtesy CREAL

Creal co-founder Tomas Sluka tells Road to VR that this Summer the company has succeeded in bringing its prototype technology into a head-mounted form-factor with the creation of preliminary AR and VR headset dev kits.

Beyond ongoing development of the technology, a primary driver for the funding round was to pick up new hires who had entered the job market after Magic Leap’s precarious funding situation and the ousting of CEO Rony Abovitz earlier this year, Sluka said.

Image courtesy CREAL

CREAL doesn’t expect to bring its own headset to market, but is instead positioning itself to work with partners and eventually license its technology for use in their headsets. The company aims to build a “complete technology package for the next-generation Augmented Reality (AR) glasses,” which will likely take the form of a reference design for commercialization.

The post CREAL Raises $7.2 Million to Bring its Light-field Display to AR Glasses appeared first on Road to VR.
