
Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions

source link: https://kguttag.com/2024/02/16/apple-vision-pros-avp-image-quality-issues-first-impressions/

Speaking at SID LA One Day Conference Feb 23, 2024


As it is one week away, I want to mention again that I will speak at the SID LA One Day Conference on Feb 23, 2024. The main topic of the talk will be “Interesting Mixed Reality Things I Saw at CES and the AR/VR/MR conferences,” but I will likely include some of my Apple Vision Pro experience.

Introduction

I often say and write, “The simple test patterns are often the toughest for display systems to get right because the eye will know when something is wrong.” If a flat white image is displayed and you see color(s), you know something is wrong. Humans are terrible judges of absolute color, including various white color temperatures, but the eye is sensitive to variations in color. As will be shown, the Apple Vision Pro (AVP) fails the simple (mostly) white display test. While not as horrible as some other headsets, you would never buy a modern TV or computer monitor with such poor white uniformity.


Test Pattern Used for this Article

For testing resolution, the simplest thing to do is put up “line pairs” and see if you can see the right number of lines and whether they are blurry. Once again, as will be shown, the AVP has problems. The main test pattern for today combines a mostly white image, to test white uniformity, with a series of vertical and horizontal lines, to test resolution. The AVP has serious problems displaying even modestly high-resolution content, which was expected based on basic sampling theory (discussed in Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous and Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous), but it also shows some “unusual,” worse-than-expected behavior due to processing by the AVP.
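For readers who want to experiment, below is a minimal sketch (my own illustration, not the actual chart used in this article) that generates a mostly white 1920x1080 image with groups of four single-pixel lines separated by two-pixel gaps, similar in spirit to the sub-patterns discussed later.

```python
# Minimal sketch of a line-pair style test pattern (not the actual chart used
# in this article): a mostly white 1920x1080 field with groups of four
# single-pixel black lines separated by two-pixel gaps.
import numpy as np
from PIL import Image

W, H = 1920, 1080
img = np.full((H, W), 255, dtype=np.uint8)  # white background

def vertical_group(img, x0, y0, height, n_lines=4, gap=2):
    """Draw n_lines one-pixel vertical lines, each separated by `gap` white pixels."""
    for i in range(n_lines):
        img[y0:y0 + height, x0 + i * (gap + 1)] = 0

def horizontal_group(img, x0, y0, width, n_lines=4, gap=2):
    """Draw n_lines one-pixel horizontal lines, each separated by `gap` white pixels."""
    for i in range(n_lines):
        img[y0 + i * (gap + 1), x0:x0 + width] = 0

# A few example groups scattered over the white field.
vertical_group(img, 100, 100, 400)
horizontal_group(img, 600, 100, 400)
vertical_group(img, W // 2, H // 2, 200)

Image.fromarray(img).save("line_pair_test_pattern.png")
```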

Simple 2-D images, as often occur in “simple office applications,” are often the most challenging for the AVP, or any headset, to present as stationary flat objects in 3-D space. In addition to dealing with 3-D translations, the AVP’s optics are highly distorting, and the distortion is a function of where the eye is located and pointing. The result is that every pixel in the 2-D image must be resampled at least once, if not more than once, resulting in an inevitable loss of fidelity, even for 2-D images with much lower resolution than the AVP’s display.

Anyone with even a basic knowledge of digital image and signal processing should know these are fundamental problems; it’s not that the AVP is doing something particularly wrong (although there is also some “wrong” behavior), but rather that it is an impossible problem to solve well, given the need for optical correction, basic sampling theory, and the fact that the AVP’s display resolution is lower than the eye’s resolution.
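As a toy illustration of why the resample is unavoidable and lossy (my own example, not AVP code), the sketch below warps a pattern of single-pixel lines through a small rotation with bilinear interpolation, standing in for the view-dependent warp a headset applies to lock a flat window in 3-D space; the formerly pure black-and-white lines come back as partially merged grays.

```python
# Toy illustration (not AVP code): resampling a 1-pixel line pattern through
# even a small geometric transform smears the lines into gray values.
import numpy as np
from scipy.ndimage import rotate

# Four 1-pixel vertical black lines separated by 2-pixel gaps on white.
src = np.full((64, 64), 255.0)
for i in range(4):
    src[:, 20 + i * 3] = 0.0

# A 3-degree rotation with bilinear interpolation stands in for the
# view-dependent warp needed to lock a flat window into 3-D space.
warped = rotate(src, angle=3.0, reshape=False, order=1, mode="nearest")

print("source row:", src[32, 15:35].astype(int))
print("warped row:", np.round(warped[32, 15:35]).astype(int))  # 0/255 becomes intermediate grays
```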


AVP (top) & MQ3 (bottom) Optics on a 14″ Macbook

Distorting Optics

iFixit was kind enough to lend me the AVP display and pancake optics from iFixit’s Vision Pro Teardown Part 2: What’s the Display Resolution? and a Meta Quest 3 display and pancake optics (which are similar to the Meta Quest Pro optics) from Meta Quest 3 Teardown and the Future of VR Repairability. I extracted the lenses from their housings to take a picture through the optics of a spreadsheet grid on a MacBook 14 screen. Since the circular polarizers were glued to the display devices, I removed a circular polarizer from a pair of REALD-type 3-D glasses (see here for how REALD glasses work). The picture (right) shows the setup taken from “far” away from the eye position (eye-view a bit later).

For more on pancake optics and why I needed to add the circular polarizer (a linear polarizer plus a quarter waveplate), see Apple Vision Pro (Part 4) – Hypervision Pancake Optics Analysis and Meta (aka Facebook) Cambria Electrically Controllable LC Lens for VAC? (which includes diagrams explaining how pancake optics work). As an interesting aside for the optics nerds, the AVP and MQ3 optics require opposite-handed (one left- and one right-hand) circularly polarized light (fortunately, REALD-type glasses come with one of each).


iFixit also took a picture of the AVP’s OLED display with the optic removed (left), which shows how much the image must be pre-corrected due to the distortion of the optics.

Below are pictures through the AVP (below left) and MQ3 (below right) pancake optics on top of a 14″ MacBook Pro M3 Pro displaying a spreadsheet with a square grid. The camera is close to the optics (widest FOV). The MacBook pixels are about 13.33 times bigger linearly than the AVP pixels, so ~178 AVP pixels will fit inside a single MacBook pixel. Also, the distances from the display to the optics were not exact, so the images below only give a rough idea of the optical distortion. If you click on the image, you will be able to see that the red, green, and blue colors are separating (chromatic aberrations). Since the MacBook pixels are huge compared to the AVP’s, the chromatic aberrations span multiple AVP pixels in roughly the outer 1/3rd of the FOV.

The AVP Micro-OLED display is about 1.1 inches wide, whereas the MQ3 Display is ~1.8 inches wide, so the AVP must be magnified by about 1.6x more for about the same FOV. Thus, the letter “H” looks bigger via the AVP’s optics. We are looking for the distortion in the lines and the rate of change in the size of the letters.
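A quick sanity check on the arithmetic above (the 13.33x linear pixel-size ratio and the display widths are the figures quoted in this article; the rest follows from them):

```python
# Quick sanity check of the numbers quoted above.
macbook_to_avp_linear = 13.33          # MacBook pixel pitch / AVP pixel pitch (linear ratio)
print(macbook_to_avp_linear ** 2)      # ~177.7 -> ~178 AVP pixels fit in one MacBook pixel

avp_display_width_in = 1.1             # approximate AVP micro-OLED width (inches)
mq3_display_width_in = 1.8             # approximate MQ3 display width (inches)
print(mq3_display_width_in / avp_display_width_in)  # ~1.64 -> AVP needs ~1.6x more magnification
```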


The image on the right overlays the AVP’s distorting lines (in red) on top of the MQ3. Even though the AVP magnifies by about 1.6x more, the distortion seems similar, which is a remarkable accomplishment, but both are still highly geometrically distorting, like almost all VR optics.

To support the wide FOV with a relatively (to the Meta Quest Pro and Quest 3) small display, Apple has developed a more radical approach of curving the quarter waveplate and having a concave lens on the eye side of the optics (below left, from an interesting analysis by Hypervision). In contrast, the Meta pancakes (below right) have a flat quarter waveplate and convex lens on both the eye and display sides.


The Apple design with the concave surface is thought to require eye-tracking correction to work without significant distortion, including pupil swimming and color problems. If the eye tracking becomes “confused” by, say, a person wearing glasses or by closing their eyes to a slit with just enough to see, the displayed image can become “unstable” in geometry and color.

By definition, quarter waveplates (QWPs) are color/wavelength dependent, and their effect on polarization (and therefore color) also depends significantly on the angle of the incident light. The curved QWP is necessitated by the optics design.


There is no free lunch with digital pre-correction; the resampling to remove the distortion comes at the expense of resolution. Display pixels in the center of the FOV are less magnified, while pixels become progressively more magnified moving out from the center. A thin line might be the size of one pixel in the center of the FOV and less than 1/3rd the size of a pixel on the outer part of the FOV.
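To make the trade-off concrete, here is a toy radial distortion model (a generic pincushion polynomial made up for illustration, not Apple’s actual optics): as the local magnification grows toward the edge, the digitally pre-corrected source must shrink by the same factor, so a one-pixel-wide line ends up covering only a fraction of a display pixel there.

```python
# Toy radial distortion model (generic pincushion, NOT Apple's actual optics):
# displayed angle ~ r * (1 + k * r^2), where r is the normalized display radius.
# The local magnification d(angle)/dr grows toward the edge, so after digital
# pre-correction a 1-pixel-wide source line covers a smaller fraction of a
# display pixel at the edge than in the center.
import numpy as np

k = 0.5                                # assumed distortion coefficient, for illustration only
r = np.linspace(0.0, 1.0, 5)           # normalized radius: center -> edge
local_mag = 1.0 + 3.0 * k * r**2       # derivative of r * (1 + k * r^2)

for radius, mag in zip(r, local_mag):
    print(f"r = {radius:.2f}  local magnification = {mag:.2f}  "
          f"(a pre-corrected 1-pixel line is ~{1.0 / mag:.2f} display pixels wide)")
```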

I discussed the issues of optical distortion in Apple Vision Pro (Part 5B) using the Meta Quest Pro as an example. In the best case, the distortion correction can be done in combination with rendering and 3-D mapping, so the resampling resolution loss is only taken once. But often, for practicality and simplicity of software, bitmap images are resampled more than once when represented in 3-D space. I don’t know if this is the case for the AVP, but I suspect it is: the first resampling is to a larger image size, followed by the resampling into 3-D space. In the case of the AVP, there are clearly different resamplings for the foveated and non-foveated regions.


Eye Tracking Correction

Apple is using eye tracking to correct the optics in addition to foveated rendering, and most of the time, thanks to eye tracking and processing technology, the user will be unaware of all the dynamic corrections being applied. Occasionally, the eye-tracking-based rendering can go very wrong, as I showed last time in Spreadsheet “Breaks,” The Apple Vision Pro’s (AVP) Eye-Tracking/Foveation & the First Through-the-optics Pictures. The AVP can display bizarre results when the enhancements are combined with foveated rendering.

While the native spreadsheet caused the dramatic problems I wrote about previously, I have seen similar eye-tracking artifacts with some static bitmaps (to be shown in a future article). Eye-tracking correction and foveated rendering are clearly being applied to bitmap images as well.

One problem I have seen with the AVP in both cases is that the foveated portion has generally had “contrast enhancement,” with the unwanted side effect of not preserving average brightness, thus making the foveated region’s boundary visible. There is also increased aliasing scintillation (wiggling) in the foveated rendered area, as would be expected since it is rendered at higher resolution and sharper.

To be fair, most of the time, the foveated rendering does a good job. But it can fail either constantly with some images or just occasionally with others. Whether the failure is visible can depend on the image’s source (native or, say, a mirror of the MacBook Pro).

Eye movement involves translation and rotation; thus, the eyes will look through the optics at different places and at different angles. This change in location and angle causes the optics to behave differently, which would typically cause “pupil swim,” a distortion that varies (wobbles) with eye movement. Looking through the optics at an angle will also cause chromatic aberrations (color fringing). The AVP’s pancake optics will also cause large-area color variations with eye movement. Overall, the AVP seems to do a good job of digitally removing pupil swim and chromatic aberrations.
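Eye-tracking-driven correction of these color errors ultimately comes down to resampling. As a rough sketch of one generic form it can take (a lateral chromatic aberration pre-correction of my own construction, not Apple’s or Almalence’s actual method): if the optics magnify red slightly more than blue for the current eye position, the renderer can pre-scale each channel by the inverse factor so the colors land back on top of each other at the eye.

```python
# Generic sketch of lateral chromatic aberration pre-correction (NOT Apple's
# or Almalence's actual algorithm): if the optics magnify one color channel
# slightly more than another, pre-shrink that channel's content by the same
# factor so the channels land back on top of each other at the eye.
# The magnification numbers below are made up for illustration.
import numpy as np
from scipy.ndimage import map_coordinates

def scale_content_about_center(channel, mag):
    """Shrink (mag > 1) or expand (mag < 1) the channel's content by 1/mag,
    keeping the image size fixed, by scaling the sampling coordinates by mag."""
    h, w = channel.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    src_y = cy + (yy - cy) * mag
    src_x = cx + (xx - cx) * mag
    return map_coordinates(channel, [src_y, src_x], order=1, mode="nearest")

def precorrect_chromatic(img_rgb, channel_mags=(1.004, 1.000, 0.996)):
    """Pre-scale each channel by the inverse of its assumed optical
    magnification so that, after the optics, all three channels overlap again."""
    out = np.empty_like(img_rgb, dtype=float)
    for ch, mag in enumerate(channel_mags):
        out[..., ch] = scale_content_about_center(img_rgb[..., ch].astype(float), mag)
    return out

# Example: a white square on black, pre-corrected for the toy magnifications above.
img = np.zeros((64, 64, 3))
img[16:48, 16:48, :] = 1.0
corrected = precorrect_chromatic(img)
print(corrected.shape)
```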

Almalence demonstration of pupil swim and their robot-controlled camera for developing eye tracking-based optics correction

The company Almalence, which I met with at both CES (in the PixMax booth) and the AR/VR/MR (in their own booth), develops software for correcting various optical problems caused by eye movement based on eye tracking. They have even demonstrated the ability to improve resolution, which I saw using PixMax (and I have heard independent reports saying it works well in practice and not just demos). Almalence has developed an eye-tracking correction for several different headsets. The video (left) demonstrates the “pupil swim” issue with before and after views. Almalence uses this “eye simulator” to develop its eye-tracking-based optical correction.

AVP Eye Tracking is a “Must Have” for the Optics to Work – A “Nice to Have” for Selection

While all the marketing attention is on using eye tracking for input selection, eye tracking is critical to generating a good image, more so than with prior optics designs. This also helps explain why very specific, characterized inserts are required for vision correction, even though there is enough room and eye relief for glasses with small frames to fit.

In my experience and use, eye- and hand-tracking-based selection, as it currently works, is “nice to have” as a secondary selection method. But you really need a trackpad or mouse and keyboard to do serious work. Yes, it is good to have the ability to select without needing another physical device. Still, it can also be a terribly time-consuming nuisance as the main/only input device. With a physical device, your eyes will naturally look ahead as you click, but with the AVP, this will cause you to click on the wrong thing. Recovering from an inadvertent eye or finger movement, undoing it, and then doing what was desired can be a pain. Additionally, it is simply not accurate enough to pick small items.

AVP’s FOV is Highly Variable Based on Eye Distance but with only a Very Small Change in Magnification

Below are two pictures taken through the AVP’s left-eye optics, showing the FOV at the eye position set by the Zeiss optical inserts with the 25W face adapter (see the Reddit topic Apple Vision Pro Light Seals decoded) and with the eye as close as possible to the optics with no face adapter. For the optical inserts to correct vision properly, they must maintain a vertex distance (the distance from the eye to the lens), which usually results in a deeper face adapter being recommended if you order inserts. As has been widely reported, the AVP’s FOV increases dramatically if you remove the light seal and move your eye as close as possible to the optics (below right).


Both pictures above were taken with the light shield and optical inserts removed, as the light shield would mechanically interfere and the insert would mess up the camera optically. The camera was moved on a tripod with a “macro focusing rail” to position it to approximate the FOV as seen by my eye (which is why there is a spreadsheet grid with a “ruler” on it).

Interestingly, while the FOV changes dramatically, the magnification between the two images increases by only about 1% (1.01 times) as the camera/eye moves closer (see inset on the above right picture).


Through-the-Optics Image

The picture below shows roughly the FOV I see with the Zeiss inserts. The picture was taken by a Canon R5 with a 16mm lens using the camera’s 9-way pixel shift to produce a 400 mp initial image. That picture was then scaled down by a factor of 3 linearly (click on the image below to see the ~45mp image). The test pattern (in lossless PNG format) can be found on my Test Pattern Page or by clicking on the image on the right.

The test pattern has 1920 by 1080 pixels or just over half the resolution of each AVP OLED display (according to iFixit, the lit area totals 3660 px by 3200 pixels). Since the spreadsheet doesn’t fill the FOV, the 1920 horizontal pixels in the test pattern are mapped into very roughly 3000 AVP pixels of varying sizes due to optical distortion correction. The AVP does a very good job of correcting the geometric distortion of the optics overall.


Color Uniformity Issues

As I wrote in the Introduction, simple, mostly white images test a display’s color uniformity (known as “Color Purity” back in the days of CRTs). The camera is more “objective” because the human visual system dynamically readjusts colors both from image to image and within a single image, so the camera will make the problem look worse than it may appear to the eye. Still, there are definitely color uniformity problems with the AVP that I see with my eyes as well. There is a cyan ring (lack of red) on the outside of every image, and the center of the screen has splotches of color (most often red/pink).

The amount of color variation is not noticeable in typical colorful scenes like movies and photographs. Still, it is noticeable when displaying mostly white screens, as commonly occurs with web browsing, word processing, or spreadsheets.

The size and shape of the outer cyan ring and center red splotches will vary with how close the eye gets to the optics (see earlier picture comparing FOV sizes based on eye distance). It is also known that the AVP’s eye tracking is used to try to correct for color variation. I have seen some bizarre color effects when eye tracking is lost.

Close Up Full Resolution Crop Showing Center Details

The image below is a crop from the center of the original 400-megapixel picture. I tried to pack a lot into this image, including some pieces of the source image scaled up to about the same size as the camera image, a view through a 50mm lens (with 3.125 times the center resolution of the original 16mm lens), which was used to estimate the FOV of each center pixel, plus some highly magnified overlays showing details in the lines of the test sub-patterns.


A very useful feature of the AVP is what I call the “eye-tracking cursor,” what Apple calls the eye pointer, which is available in the AVP’s “Accessibility” menu. I also modified the pointer to have a red ring to help it stand out. The cursor can be turned on and off with a triple click of the crown dial. This cursor is particularly important when taking pictures to know where the AVP thinks the “eye” (camera lens) is pointing. It can also be useful when using eye tracking as a selection device. For this first set of pictures, the eye-tracking cursor was in the center of the screen where I wanted it, confirming that the eye tracking was not “lost.”

About 44.4 pixels per degree (PPD) in the center of the image – Gives ~20/30 vision in the center and worse elsewhere


To determine the pixels per degree in the center of the AVP screen, I have been taking high-resolution images with a narrow-FOV 50mm lens, where the pixel boundaries are clearly visible, and scaling and fitting them to the images from the much wider-FOV 16mm lens. The result I get is about 44.4 pixels per degree (PPD) in the center of the image.

Having ~44.4 PPD gives about 20/30 vision in the center (confirmed by looking at a virtual Snellen eye chart). This is the best case, in the center of the screen and viewing the display directly, not through the passthrough cameras, which are worse (more like 20/35 to 20/40). The resolution drops if you look beyond the center 1/3rd of the FOV, even with eye-tracking foveated rendering. With the AVP, you have somewhat poor vision, which it seems to try to compensate for by defaulting to making everything bigger (more on this in a bit).
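A rough back-of-the-envelope conversion from pixels per degree to a Snellen figure, using the common approximation that 20/20 acuity corresponds to resolving about 1 arcminute (roughly 60 PPD), lands in the same neighborhood:

```python
# Rough back-of-the-envelope conversion from pixels-per-degree to a Snellen
# figure, using the common approximation that 20/20 acuity corresponds to
# resolving ~1 arcminute (roughly 60 pixels per degree). An approximation,
# not a clinical measurement.
ppd = 44.4
arcmin_per_pixel = 60.0 / ppd                  # ~1.35 arcmin per display pixel
snellen_denominator = 20.0 * arcmin_per_pixel  # scale the 20/20 line by that ratio
print(f"{arcmin_per_pixel:.2f} arcmin/pixel -> roughly 20/{snellen_denominator:.0f}")
# ~20/27, in the same ballpark as the ~20/30 observed on a virtual Snellen chart
```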

The Problems of Resampling

From the 50mm lens shot, I made “pixel rulers” (rows and columns of red lines) to show the pixel boundaries versus various features in the test pattern. A magnified close-up of the higher-resolution image and the rulers is shown in the lower right corner labeled 1c.

Across the whole test pattern are sets of four lines, followed by a gap of two pixels and then four more lines. If you look at inset 1a, you will notice that the AVP has turned both sets of four lines into only three lines each. If you look at the longer set of these lines, for example, under the large #1, you will see the lines “wobbling” in the gaps and spacing but always, at best, three lines. These lines are constantly wiggling even if you hold your head steady. If you look at the four vertical lines to the right of the large #1, they are barely distinguishable as multiple lines.

The same four lines becoming three lines happens with the center test target; see the magnified section 1b above. As should be expected based on sampling theory, it takes more than two times the resolution of the display to represent arbitrarily oriented lines in 3-D space. Not shown in the still pictures is that everything “scintillates” (flashes at the display-pixel level) and wiggles with any microscopic or macroscopic head movement. Even when one moves closer, so that there are well more than two AVP display pixels for every pixel in the test pattern (above two times the base “frequency” of the lines) and the right number of lines is clearly displayed, there is still scintillation and wiggling.
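Below is a minimal 1-D sketch of the sampling issue (my own illustration, not the AVP’s actual pipeline): four single-pixel lines with two-pixel gaps, resampled with linear interpolation at a non-integer scale and a sub-pixel phase offset. Depending on the phase, the four dark minima come back with very different depths, and the weakest can nearly wash out so the pattern reads as only three lines.

```python
# Minimal 1-D illustration (my own sampling example, not the AVP's pipeline):
# four 1-pixel dark lines with 2-pixel gaps, resampled with linear
# interpolation at a non-integer scale. Depending on the sub-pixel phase,
# the four minima come back with very different depths, and the weakest can
# nearly wash out so the pattern reads as only three lines.
import numpy as np

src = np.full(16, 255.0)
src[[2, 5, 8, 11]] = 0.0          # four dark lines, two-pixel gaps

def resample_linear(signal, scale, phase=0.0):
    """Resample `signal` by `scale` using linear interpolation with a sub-pixel phase offset."""
    n_out = int(len(signal) * scale)
    x_src = np.arange(n_out) / scale + phase
    return np.interp(x_src, np.arange(len(signal)), signal)

for phase in (0.0, 0.3, 0.6):
    print(f"phase {phase}:", np.round(resample_linear(src, scale=0.78, phase=phase)).astype(int))
```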

Computer-generated images with sharp edges, including everyday applications like word processing, simple presentation graphics and charts, and spreadsheets, are very hard to reproduce when locked into 3-D space (see the Appendix for more information).


Foveated Rendering

Returning to the original full camera image above, a large dashed-line square roughly indicates the foveated rendering boundary. The image below takes a full-resolution crop showing a horizontal boundary (2a) and a vertical boundary (2b).

Looking at the two sets of four lines, the effective resolution, due to optical distortion and resampling, has already dropped to the point where closer to two lines are distinctly visible from the original four. So, even without foveation, the resolution is dropping by this point in the FOV.


The AVP’s Make Everything Big and Bold “Trick”

The AVP processes and often over-processes images. The AVP’s defaults make everything BIG, whether the content is AVP native or MacBook mirrored. I see this behavior as a “trick” to make the AVP’s resolution seem better. In the case of native windows, I had to fix the window in place and then move back from it to work around these limitations. There are fewer restrictions on MacBook mirroring window sizes, but the default is still to make windows and their content bigger.

The AVP also likes to try to improve contrast and will oversize the edges of small things like text, which makes everything look like it was printed in BOLD. While this may make things easier to read, it is not a faithful representation of what is meant to be displayed. This problem occurs both with “native” rendering (drawing a spreadsheet) and when displaying a bitmapped image. As humans perceive higher contrast as higher resolution, making things bolder is another processing trick to give the impression of higher resolution.
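A toy example of how resampling plus a contrast-boosting tone curve makes thin strokes read as bold (my own illustration of the effect, not Apple’s actual processing): a one-pixel dark stroke resampled at a half-pixel offset becomes two mid-gray pixels, and a curve that darkens mid-grays then turns both of them much darker, effectively doubling the apparent stroke width.

```python
# Toy example (my own illustration, NOT Apple's actual processing): a 1-pixel
# dark stroke resampled at a half-pixel phase becomes two mid-gray pixels;
# a tone curve that darkens mid-grays (one crude way to boost apparent
# contrast) then renders the stroke roughly two pixels wide, i.e., "bold".
import numpy as np

stroke = np.full(9, 255.0)
stroke[4] = 0.0                                   # one-pixel dark stroke on white

x = np.arange(9) + 0.5                            # resample at a half-pixel offset
resampled = np.interp(x, np.arange(9), stroke)    # two pixels at ~128 gray

gamma = 2.2                                       # darkens mid-grays
enhanced = 255.0 * (resampled / 255.0) ** gamma

print("resampled:", np.round(resampled).astype(int))
print("enhanced: ", np.round(enhanced).astype(int))   # two much darker pixels -> "bold" stroke
```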

I see different processing and artifacts happening when natively rendering on the AVP (such as when running Excel), displaying a saved bitmap from a file on the AVP, displaying a bitmap image on a web page, and mirroring the content of a MacBook. With each test image, seeing how it will display differently with each display mode is an adventure.

The sizing restrictions mostly go away when it replicates the display of a MacBook. I use a MacBook Pro M3 Pro with a 14″ 3024 x 1964 display and a ~1.54:1 aspect ratio. The aspect ratio of the mirrored MacBook display is ~1.78:1 (16:9).

Based on other reports and my observations, the AVP does different processing when natively rendering images from display lists versus displaying bitmapped images and mirroring a MacBook.

According to The Verge on mirroring a Mac:

“There is a lot of very complicated display scaling going on behind the scenes here, but the easiest way to think about it is that you’re basically getting a 27-inch Retina display, like you’d find on an iMac or Studio Display. Your Mac thinks it’s connected to a 5K display with a resolution of 5120 x 2880, and it runs macOS at a 2:1 logical resolution of 2560 x 1440, just like a 5K display. (You can pick other resolutions, but the device warns you that they’ll be lower quality.) That virtual display is then streamed as a 4K 3560 x 2880 video to the Vision Pro, where you can just make it as big as you want. The upshot of all of this is that 4K content runs at a native 4K resolution — it has all the pixels to do it, just like an iMac — but you have a grand total of 2560 x 1440 to place windows in, regardless of how big you make the Mac display in space, and you’re not seeing a pixel-perfect 5K image.”

This certainly makes sense and seems to agree with what I am seeing. It looks like the AVP first renders the image at a higher-than-native resolution and then scales/resamples that high-resolution image into 3-D space. The problem is that even if you first scale a bitmap up to a much higher resolution, some detail will still be lost when it is resampled into 3-D space (see the Appendix on the Nyquist rate).
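As a toy sketch of that two-stage process (my own illustration, not Apple’s actual pipeline), upscaling a one-pixel line pattern to a higher intermediate resolution and then resampling it again, with a plain rescale standing in for the warp into 3-D space, still loses the crisp black-and-white structure of the original:

```python
# Toy sketch of two-stage resampling (NOT Apple's actual pipeline): scaling a
# 1-pixel line pattern up to an intermediate resolution and then resampling it
# again (a plain rescale stands in for the warp into 3-D space) still turns the
# crisp 0/255 lines into unequal grays.
import numpy as np

def resample_linear(signal, n_out):
    x_src = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(x_src, np.arange(len(signal)), signal)

src = np.full(12, 255.0)
src[[2, 5, 8]] = 0.0                       # three 1-pixel dark lines

intermediate = resample_linear(src, 24)    # stage 1: scale up to a higher resolution
final = resample_linear(intermediate, 14)  # stage 2: resample again for the display mapping
print(np.round(final).astype(int))         # the dark lines come back as unequal grays
```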

The process appears to be different for bitmaps stored directly on the AVP as I seem to see different artifacts depending on whether the source is coming from an AVP file, a web page, or by mirroring the Macbook (I’m working on more studies of this issue).

When opening the MacBook spreadsheet in an AVP window, the default is to make the fonts about 1.6x bigger angularly; they need to be that big to be roughly as readable as they are on the 14″ MacBook Pro. Combined with the wider aspect ratio, the default-sized window fills about 2.7x the horizontal field of view of the 14″ MacBook Pro at my “typical typing distance” and is so wide that I needed to turn my head to see it all.

I can hear people ask, “So it is bigger. Is that bad if it is still readable?” It is bad in the sense that information/content density has gone down. To read the same content, the eyes will have to move more.

OptoFidelity and Gamma Scientific Optical Performance Studies


OptoFidelity Buddy Motion Tester

My studies use conventional camera equipment to capture what the eye sees to give a heuristic feel for how the various headsets perform. Detailed evaluations necessary for both R&D and production require specialized cameras with robots to simulate eye and head movement.


While at the AR/VR/MR conference, I met with Gamma Scientific and Optofidelity, each of whom manufactures headset testing equipment and is in the process of evaluating the Apple Vision Pro’s optical system. Optofidelity does more of a dynamic motion analysis, whereas Gamma Scientific is doing a more detailed optical study (as I understand their differences). It will be interesting to see the results of their different forms of testing.

Quoting from public statements by Gamma Scientific and OptoFidelity:

Gamma Scientific is leveraging their NED™ RoboticEye™ test platform to conduct reference optical quality measurements on the Apple Vision Pro, objectively characterizing how a user will experience the VR display. These include key performance metrics such as brightness uniformity, color uniformity, foveated contrast, qualified FOV, eyebox volume, etc. Their reporting will be critical in benchmarking the AVP against latest international standards for AR/VR display metrology.

OptoFidelity announcement: We are excited to inform you that we will comprehensively evaluate the Apple Vision Pro using the BUDDY test system. Our testing will cover a range of performance metrics for the Vision Pro, including:


OptoFidelity Passthrough MR Comparison

  • Angular Motion-to-Photon Latency
  • Angular Jitter in a Stationary Position
  • Angular Jitter During Movement
  • Pose Repeatability (both Angular and Linear)
  • See-Through Latency (Photon-to-Photon)

I plan to share Gamma Scientific and OptoFidelity results on this blog.

OptoFidelity has already posted its first results in its blog articles Apple Vision Pro Benchmark Test 1: See-Through Latency, Photon-to-Photon (right) and APPLE VISION PRO BENCHMARK TEST 2: Angular Motion-to-Photon Latency in VR (below). The first study confirms Apple’s claim that the AVP has less than a 12ms “photon-to-photon” delay (the time from something moving to the camera displaying it) and shows that the delay is nearly four times less than the latest pass-through MR products from Meta and HTC. OptoFidelity Part 2 deals with the delay from (head) motion to seeing something in the display (motion-to-photon). The Quest Pro, Quest 3, and AVP all use predictive motion for constant movement, with the AVP being particularly aggressive (lower left), but all are within a person’s ability to notice. The lower-right chart uses the standard deviation for a mix of short and long movements, where prediction can be counter-productive.


What Others Think

As I prepare to post this article, we are at the two-week anniversary of the AVP being delivered to the public. We are starting to get past the “wild enthusiasm” stage, where the wonders of new technology are impressive, and people are just starting to see past the more superficial cracks, such as weight, fit, and the external battery. We are getting past the “demoware” and asking, “What will this do for me on a regular basis?” That is not to say that some individuals won’t love it for some applications.

My analysis may be controversial, with all the “instant experts” on YouTube and social media influencers praising the AVP’s resolution and display quality. As I often say, “Anyone that has ever watched a television or slept in a Holiday Inn last night thinks they are a display expert.”

I have watched many videos and read articles on the AVP and have not seen anyone seriously discuss the color uniformity problems (I certainly may have missed someone). Snazzy Labs’ latest video is the only one I have seen that talks about antialiasing and moiré text effects and about setting images/text large by default to hide problems. Snazzy and a very few others have discussed the glare issues with the optics (which I plan to show and discuss in later articles). From a “user experience” perspective, I agree with most of The Verge’s writings and podcast comments in the last two weeks. One of my favorite quotes was originally made by Adi Robertson and cited by Nilay Patel from The Verge: “It’s magic until it’s not.”

Conclusion and My Comments

Simply put, the AVP’s display quality is good compared to almost every other VR headset but very poor compared to even a modestly priced modern computer monitor. Today’s consumer would not pay $100 for a computer monitor that looked as bad as the AVP.

As I have said before, “Apple Does Not Get different physics.” Apple can’t beat sampling theory, and even if it had 8K displays per eye, there would be some resampling problems, but fewer as it would be at the eye’s resolution limit.

For “spatial computing applications,” the AVP “cheats” by making everything bigger. But in doing so, information density is lost, making the user’s eyes and head work more to see the same amount of content, and you simply can’t see as much at once. Making everything bolder may make the text easier to read, but it reduces the faithfulness of the original image. Most of the time, the foveated rendering “works,” but sometimes it fails spectacularly.

Am I out to break the AVP? Yes, in a way, but I am trying to be fair about it. I know for a fact it does not have the resolution necessary/desired for some applications. I’m taking a “debugger’s approach” by using my knowledge of display optics and image processing to see how the AVP works, and then I can construct test cases to show how it fails. This is more or less the approach I used back in the 1980s and 1990s when I was a CPU architect at Texas Instruments to verify our designs, before the days of nearly exhaustive testing by computers: I had to go from designing it to work to thinking, “How can I make it fail?”

Per my usual practice, I have shown my results, including providing test patterns so others can verify them.

Appendix – Some More on Resampling and Nyquist Rate

There are inevitable problems when resampling below the Nyquist rate, which I discussed in Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, and they are made worse when compensating for the AVP’s optical distortion and eye-tracking-based foveation.

A simple horizontal line, in the frequency domain, looks like a square pulse function with infinite odd harmonics perpendicular to the line; a square dot has infinite harmonics in two directions. So even if the display has twice the resolution of the source image, there are going to be some “errors” that show up as artifacts. Then, with any movement, the errors move/change, drawing the eye to perceive the errors. Fundamentally, and oversimplifying Nyquist, when rendering (resampling) a 2-D object in 3-D space, you need well more than twice the resolution of the original image to render it without significant problems most of the time. Software anti-aliasing can only reduce some of the ill effects, at the expense of blurring the image. Even if the AVP had two times the pixels (an ~8K display), there are patterns that would be “challenging” to display.
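For reference, here is a minimal statement of the two facts being leaned on above, in standard textbook form (my notation, not from any particular reference): the Fourier series of an ideal square wave contains only odd harmonics that decay slowly (as 1/k), so its spectrum never terminates, and faithful reconstruction requires sampling above twice the highest spatial frequency present.

```latex
% Fourier series of an ideal 50%-duty-cycle square wave of period T:
% only odd harmonics, decaying as 1/k, so the spectrum never terminates.
f(x) \;=\; \frac{4}{\pi} \sum_{k = 1, 3, 5, \dots} \frac{1}{k} \, \sin\!\left(\frac{2\pi k x}{T}\right)

% Nyquist condition: to reconstruct content whose highest spatial frequency is
% f_max, the (re)sampling rate f_s must satisfy
f_s \;>\; 2 f_{\max}
```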

I discussed the problem of drawing a single pixel (without getting too much into Nyquist sampling theory) in Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous.

